Fuzz introspector
For issues and ideas: https://github.com/ossf/fuzz-introspector/issues

Project functions overview

The following table shows data about each function in the project. The table includes every function present in the fuzzer executables, so it may also contain functions from third-party libraries.

For further technical details on the meaning of the columns in the table below, please see the Glossary.

Func name | Functions filename | Args | Function call depth | Reached by Fuzzers | Runtime reached by Fuzzers | Combined reached by Fuzzers | Fuzzers runtime hit | Func lines hit % | I Count | BB Count | Cyclomatic complexity | Functions reached | Reached by functions | Accumulated cyclomatic complexity | Undiscovered complexity
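
Several of these columns build on cyclomatic complexity. As background (this is the standard definition; the Glossary describes how Fuzz Introspector applies it), the cyclomatic complexity of a function's control-flow graph is

    M = E - N + 2P

where E is the number of edges, N the number of nodes, and P the number of connected components (1 for a single function). Accumulated cyclomatic complexity is, roughly, this value summed over a function and all functions it reaches; undiscovered complexity is accumulated complexity not reached by any fuzzer.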

Fuzzer details

Fuzzer: fuzzers/fuzz_inference.cpp

Call tree

The calltree shows the control flow of the fuzzer, overlaid with coverage information to show how much of the code the fuzzer can potentially reach is actually covered at runtime. Below there is a link to a detailed calltree visualisation as well as a bitmap showing a high-level view of the calltree. For further information about these topics, please see the glossary entries for full calltree and calltree overview.
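
As a point of reference for reading these calltrees: each calltree is rooted at the fuzzer's libFuzzer entry point. The following is an illustrative sketch only; the real harness bodies (fuzzers/fuzz_inference.cpp and the others listed here) are not reproduced in this report, and parse_input is a hypothetical stand-in for the llama.cpp API under test.

    // Minimal libFuzzer-style harness sketch (hypothetical; not from llama.cpp).
    #include <cstddef>
    #include <cstdint>
    #include <string>

    // Hypothetical stand-in for the API under test.
    static void parse_input(const std::string &text) { (void)text; }

    extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        // Every function reachable from this entry point appears in the calltree;
        // runtime coverage then shows which of those calls were actually hit.
        parse_input(std::string(reinterpret_cast<const char *>(data), size));
        return 0;
    }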

Call tree overview bitmap:

The project has no code coverage. Blockers are not displayed, as blocker detection depends on code coverage.

Fuzzer: fuzzers/fuzz_load_model.cpp

Call tree

Call tree overview bitmap:

The project has no code coverage. Blockers are not displayed, as blocker detection depends on code coverage.

Fuzzer: fuzzers/fuzz_structured.cpp

Call tree

Call tree overview bitmap:

The project has no code coverage. Blockers are not displayed, as blocker detection depends on code coverage.

Fuzzer: fuzzers/fuzz_apply_template.cpp

Call tree

Call tree overview bitmap:

The project has no code coverage. Blockers are not displayed, as blocker detection depends on code coverage.

Fuzzer: fuzzers/fuzz_tokenizer.cpp

Call tree

Call tree overview bitmap:

The project has no code coverage. Blockers are not displayed, as blocker detection depends on code coverage.

Fuzzer: fuzzers/fuzz_grammar.cpp

Call tree

Call tree overview bitmap:

The project has no code coverage. Blockers are not displayed, as blocker detection depends on code coverage.

Fuzzer: fuzzers/fuzz_json_to_grammar.cpp

Call tree

Call tree overview bitmap:

The project has no code coverage. Blockers are not displayed, as blocker detection depends on code coverage.

Fuzzer: fuzzers/fuzz_structurally_created.cpp

Call tree

Call tree overview bitmap:

The project has no code coverage. Blockers are not displayed, as blocker detection depends on code coverage.

Files and Directories in report

This section shows which files and directories are considered in this report. It is included because Fuzz Introspector may take more code into account than is desired, e.g. third-party code that is irrelevant to the threat model. The listings below help identify whether too many files/directories are included. If too much is included, Fuzz Introspector supports a configuration file that can exclude data from the report; see the following link for more information on how to create a config file: link
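
For illustration, an exclusion config for this project might look like the sketch below. The section keyword is an assumption here, not a confirmed format; consult the documentation linked above for the authoritative syntax. The vendor/, tests/, and examples/ trees listed in this section would be natural candidates to exclude.

    # Hypothetical exclusion config; the FILES_TO_AVOID keyword is an
    # assumption, see the Fuzz Introspector docs for the actual format.
    FILES_TO_AVOID
    /src/llama.cpp/vendor/
    /src/llama.cpp/tests/
    /src/llama.cpp/examples/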

Files in report

Source file | Reached by | Covered by
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-impl.h [] []
/src/llama.cpp/src/llama-memory.cpp [] []
/src/llama.cpp/tests/test-json-partial.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/kleidiai/kleidiai.cpp [] []
/src/llama.cpp/src/llama-kv-cache.cpp [] []
/src/llama.cpp/ggml/src/ggml-opt.cpp [] []
/src/llama.cpp/ggml/src/ggml-webgpu/ggml-webgpu.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/ggml/src/ggml-sycl/element_wise.hpp [] []
/src/llama.cpp/examples/gen-docs/gen-docs.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/set_rows.cpp [] []
/src/llama.cpp/tests/test-rope.cpp [] []
/src/llama.cpp/tools/run/run.cpp ['fuzzers/fuzz_json_to_grammar.cpp'] []
/src/llama.cpp/pocs/vdot/vdot.cpp [] []
/src/llama.cpp/common/log.cpp [] []
/src/llama.cpp/fuzzers/fuzz_structurally_created.cpp ['fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/ggml/src/gguf.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/common/json-schema-to-grammar.cpp ['fuzzers/fuzz_json_to_grammar.cpp'] []
/src/llama.cpp/tools/export-lora/export-lora.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/common/common.h [] []
/src/llama.cpp/src/llama-kv-cells.h ['fuzzers/fuzz_inference.cpp'] []
/src/llama.cpp/src/llama-memory-hybrid.h [] []
/src/llama.cpp/common/regex-partial.h [] []
/src/llama.cpp/common/console.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/hbm.cpp [] []
/src/llama.cpp/examples/diffusion/diffusion-cli.cpp [] []
/src/llama.cpp/fuzzers/fuzz_structured.cpp ['fuzzers/fuzz_structured.cpp'] []
/src/llama.cpp/src/llama-cparams.cpp [] []
/src/llama.cpp/common/http.h [] []
/src/llama.cpp/src/llama-adapter.cpp [] []
/src/llama.cpp/pocs/vdot/q8dot.cpp [] []
/src/llama.cpp/tools/tokenize/tokenize.cpp [] []
/src/llama.cpp/tests/test-chat.cpp [] []
/src/llama.cpp/tests/test-chat-template.cpp [] []
/src/llama.cpp/tests/test-gguf.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/ggml/src/ggml-sycl/common.cpp ['fuzzers/fuzz_inference.cpp'] []
/src/llama.cpp/ggml/src/ggml-cpu/amx/common.h [] []
/src/llama.cpp/ggml/src/ggml-cuda/vendors/hip.h [] []
/src/llama.cpp/tools/server/utils.hpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/tests/test-quantize-perf.cpp [] []
/src/llama.cpp/tests/test-grammar-parser.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/amx/amx.cpp [] []
/src/llama.cpp/ggml/src/ggml-cann/acl_tensor.h [] []
/src/llama.cpp/include/llama-cpp.h [] []
/src/llama.cpp/fuzzers/fuzz_apply_template.cpp ['fuzzers/fuzz_apply_template.cpp'] []
/src/llama.cpp/ggml/src/ggml-backend-impl.h [] []
/src/llama.cpp/tests/test-opt.cpp [] []
/src/llama.cpp/ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp [] []
/src/llama.cpp/src/llama-graph.h [] []
/src/llama.cpp/ggml/src/ggml-sycl/element_wise.cpp [] []
/src/llama.cpp/vendor/cpp-httplib/httplib.h [] []
/src/llama.cpp/ggml/src/ggml-cpu/arch/x86/quants.c [] []
/src/llama.cpp/src/llama-context.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_tokenizer.cpp'] []
/src/llama.cpp/common/regex-partial.cpp [] []
/src/llama.cpp/tools/mtmd/mtmd-cli.cpp [] []
/src/llama.cpp/ggml/src/ggml-impl.h ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/src/llama-impl.h [] []
/src/llama.cpp/ggml/src/ggml-sycl/wkv.cpp [] []
/src/llama.cpp/tools/cvector-generator/mean.hpp [] []
/src/llama.cpp/tools/server/server.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_tokenizer.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/tools/main/main.cpp [] []
/src/llama.cpp/src/llama-batch.cpp ['fuzzers/fuzz_inference.cpp'] []
/src/llama.cpp/ggml/src/ggml-sycl/quantize.hpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/concat.cpp [] []
/src/llama.cpp/ggml/src/ggml-cann/common.h [] []
/src/llama.cpp/src/llama-mmap.h [] []
/src/llama.cpp/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp [] []
/src/llama.cpp/src/llama-adapter.h [] []
/src/llama.cpp/src/unicode.h [] []
/src/llama.cpp/ggml/src/ggml-sycl/dmmv.cpp [] []
/src/llama.cpp/ggml/src/ggml-backend-reg.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/src/llama-memory-recurrent.cpp [] []
/src/llama.cpp/tools/cvector-generator/pca.hpp [] []
/src/llama.cpp/tests/test-chat-parser.cpp [] []
/src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp [] []
/src/llama.cpp/ggml/src/ggml-metal/ggml-metal-ops.cpp [] []
/src/llama.cpp/tests/test-backend-ops.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/fuzzers/fuzz_tokenizer.cpp ['fuzzers/fuzz_tokenizer.cpp'] []
/src/llama.cpp/vendor/nlohmann/json.hpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_tokenizer.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/common/ngram-cache.h [] []
/src/llama.cpp/vendor/minja/chat-template.hpp [] []
/src/llama.cpp/tools/mtmd/clip.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/src/llama-chat.cpp ['fuzzers/fuzz_apply_template.cpp'] []
/src/llama.cpp/common/chat-parser.h [] []
/src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/binbcast.hpp [] []
/src/llama.cpp/common/chat.h [] []
/src/llama.cpp/ggml/src/ggml-sycl/conv.cpp [] []
/src/llama.cpp/src/llama-io.h [] []
/src/llama.cpp/tests/test-grammar-integration.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_json_to_grammar.cpp'] []
/src/llama.cpp/ggml/src/ggml-sycl/rope.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/amx/mmq.cpp [] []
/src/llama.cpp/vendor/stb/stb_image.h [] []
/src/llama.cpp/ggml/src/ggml-cpu/llamafile/sgemm.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/tests/test-tokenizer-0.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/quants.hpp [] []
/src/llama.cpp/ggml/src/ggml-zdnn/mmf.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/arch/arm/quants.c [] []
/src/llama.cpp/include/llama.h [] []
/src/llama.cpp/common/llguidance.cpp [] []
/src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp [] []
/src/llama.cpp/tools/mtmd/mtmd-audio.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/gemm.hpp [] []
/src/llama.cpp/src/llama-kv-cache-iswa.cpp [] []
/src/llama.cpp/vendor/miniaudio/miniaudio.h ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/ggml/src/ggml-sycl/im2col.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/cpy.hpp [] []
/src/llama.cpp/src/llama-arch.h [] []
/src/llama.cpp/ggml/src/ggml.c ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_tokenizer.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/tools/perplexity/perplexity.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/ggml/src/ggml-sycl/getrows.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/convert.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/ops.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/dequantize.hpp [] []
/src/llama.cpp/src/llama-grammar.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp'] []
/src/llama.cpp/ggml/src/ggml-cpu/arch/arm/repack.cpp [] []
/src/llama.cpp/tests/test-sampling.cpp ['fuzzers/fuzz_json_to_grammar.cpp'] []
/src/llama.cpp/src/llama-kv-cache.h [] []
/src/llama.cpp/src/llama-memory-hybrid.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/vec.cpp [] []
/src/llama.cpp/src/llama-mmap.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/vendor/minja/minja.hpp [] []
/src/llama.cpp/tools/llama-bench/llama-bench.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/arch/arm/cpu-feats.cpp [] []
/src/llama.cpp/tools/cvector-generator/cvector-generator.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/src/llama-model-saver.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/arch/powerpc/cpu-feats.cpp [] []
/src/llama.cpp/src/llama-sampling.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/src/llama-memory.h [] []
/src/llama.cpp/ggml/src/ggml-cann/ggml-cann.cpp [] []
/src/llama.cpp/common/json-partial.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/ggml/src/ggml-common.h [] []
/src/llama.cpp/fuzzers/fuzz_inference.cpp ['fuzzers/fuzz_inference.cpp'] []
/src/llama.cpp/tools/run/linenoise.cpp/linenoise.h [] []
/src/llama.cpp/ggml/src/ggml-metal/ggml-metal-common.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/norm.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/binary-ops.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/cpy.cpp [] []
/src/llama.cpp/examples/gguf-hash/deps/sha256/sha256.c [] []
/src/llama.cpp/src/llama-quant.cpp [] []
/src/llama.cpp/examples/parallel/parallel.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/kleidiai/kernels.h [] []
/src/llama.cpp/src/llama.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/common/chat-parser.cpp [] []
/src/llama.cpp/ggml/include/ggml-cpp.h [] []
/src/llama.cpp/src/unicode.cpp ['fuzzers/fuzz_inference.cpp'] []
/src/llama.cpp/src/llama-impl.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/ggml/src/ggml-cpu/common.h [] []
/src/llama.cpp/ggml/src/ggml-cpu/unary-ops.cpp [] []
/src/llama.cpp/tests/test-regex-partial.cpp [] []
/src/llama.cpp/tools/mtmd/mtmd.h [] []
/src/llama.cpp/ggml/src/ggml-cpu/arch/x86/repack.cpp [] []
/src/llama.cpp/tests/test-double-float.cpp [] []
/src/llama.cpp/ggml/src/ggml-quants.c ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/ggml/src/ggml-zdnn/utils.cpp [] []
/src/llama.cpp/src/llama-hparams.cpp [] []
/src/llama.cpp/tools/rpc/rpc-server.cpp [] []
/src/llama.cpp/src/llama-arch.cpp [] []
/src/llama.cpp/common/log.h [] []
/src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/mmvq.cpp [] []
/src/llama.cpp/common/sampling.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/mmq.cpp [] []
/src/llama.cpp/ggml/src/ggml-metal/ggml-metal-device.cpp [] []
/src/llama.cpp/ggml/src/ggml-cann/aclnn_ops.h [] []
/src/llama.cpp/src/llama-batch.h [] []
/src/llama.cpp/tools/mtmd/mtmd.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/ggml/src/ggml-alloc.c ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/ggml/src/ggml-sycl/softmax.cpp [] []
/src/llama.cpp/src/llama-model-loader.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/examples/gguf-hash/deps/rotate-bits/rotate-bits.h [] []
/src/llama.cpp/ggml/src/ggml-sycl/binbcast.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/tsembd.cpp [] []
/src/llama.cpp/examples/gguf/gguf.cpp [] []
/src/llama.cpp/common/base64.hpp [] []
/src/llama.cpp/ggml/src/ggml-rpc/ggml-rpc.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/fuzzers/fuzz_grammar.cpp ['fuzzers/fuzz_grammar.cpp'] []
/src/llama.cpp/ggml/src/ggml-sycl/gla.cpp [] []
/src/llama.cpp/ggml/src/ggml-cann/acl_tensor.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/vec.h ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/tests/test-json-schema-to-grammar.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/src/llama-vocab.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/ggml/src/ggml-cpu/spacemit/ime.cpp [] []
/src/llama.cpp/examples/eval-callback/eval-callback.cpp [] []
/src/llama.cpp/src/llama-memory-recurrent.h [] []
/src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/ggml/include/ggml.h [] []
/src/llama.cpp/tests/test-grammar-llguidance.cpp [] []
/src/llama.cpp/ggml/src/ggml-cann/aclnn_ops.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/tools/imatrix/imatrix.cpp [] []
/src/llama.cpp/tests/get-model.cpp [] []
/src/llama.cpp/examples/deprecation-warning/deprecation-warning.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/outprod.cpp [] []
/src/llama.cpp/ggml/src/ggml-backend.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/ggml/src/ggml-cpu/spacemit/ime1_kernels.cpp [] []
/src/llama.cpp/vendor/nlohmann/json_fwd.hpp [] []
/src/llama.cpp/common/speculative.cpp [] []
/src/llama.cpp/examples/lookahead/lookahead.cpp [] []
/src/llama.cpp/tools/gguf-split/gguf-split.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/tools/mtmd/mtmd-helper.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/tests/test-alloc.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/ggml/src/ggml-blas/ggml-blas.cpp [] []
/src/llama.cpp/examples/gguf-hash/gguf-hash.cpp [] []
/src/llama.cpp/examples/gguf-hash/deps/xxhash/xxhash.h [] []
/src/llama.cpp/ggml/src/ggml-cpu/simd-mappings.h [] []
/src/llama.cpp/ggml/src/ggml-cpu/kleidiai/kernels.cpp [] []
/src/llama.cpp/src/llama-kv-cache-iswa.h [] []
/src/llama.cpp/src/llama-io.cpp [] []
/src/llama.cpp/common/common.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_tokenizer.cpp'] []
/src/llama.cpp/ggml/src/ggml.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/arch/loongarch/quants.c [] []
/src/llama.cpp/ggml/src/ggml-threading.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/ggml/src/ggml-cpu/repack.cpp [] []
/src/llama.cpp/ggml/src/ggml-zdnn/ggml-zdnn.cpp [] []
/src/llama.cpp/tools/tts/tts.cpp [] []
/src/llama.cpp/common/chat.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/common.hpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/common/arg.cpp [] []
/src/llama.cpp/tools/mtmd/clip-impl.h ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/ggml/src/ggml-cpu/traits.cpp [] []
/src/llama.cpp/src/llama-model-loader.h [] []
/src/llama.cpp/src/llama-model.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/tests/test-quantize-stats.cpp [] []
/src/llama.cpp/ggml/src/ggml-zdnn/common.hpp [] []
/src/llama.cpp/src/llama-graph.cpp ['fuzzers/fuzz_inference.cpp'] []
/src/llama.cpp/examples/gguf-hash/deps/sha1/sha1.c [] []
/src/llama.cpp/ggml/src/ggml-metal/ggml-metal.cpp [] []
/src/llama.cpp/fuzzers/fuzz_json_to_grammar.cpp ['fuzzers/fuzz_json_to_grammar.cpp'] []
/src/llama.cpp/common/arg.h [] []
/src/llama.cpp/examples/retrieval/retrieval.cpp [] []
/src/llama.cpp/fuzzers/fuzz_load_model.cpp ['fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/examples/convert-llama2c-to-ggml/convert-llama2c-to-ggml.cpp ['fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/tests/test-gbnf-validator.cpp [] []
/src/llama.cpp/tests/test-quantize-fns.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/vecdotq.hpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/quants.c [] []
/src/llama.cpp/tools/quantize/quantize.cpp [] []
/src/llama.cpp/common/ngram-cache.cpp [] []
/src/llama.cpp/examples/embedding/embedding.cpp [] []

Directories in report

Directory
/src/llama.cpp/vendor/stb/
/src/llama.cpp/tools/perplexity/
/src/llama.cpp/tools/llama-bench/
/src/llama.cpp/examples/llama.android/llama/src/main/cpp/
/src/llama.cpp/examples/gguf-hash/deps/xxhash/
/src/llama.cpp/ggml/src/ggml-vulkan/
/src/llama.cpp/tools/run//
/src/llama.cpp/ggml/src/ggml-zdnn/
/src/llama.cpp/ggml/src/ggml-sycl/
/src/llama.cpp/examples/gguf-hash/
/src/llama.cpp/tools/quantize/
/src/llama.cpp/examples/embedding/
/src/llama.cpp/ggml/src/ggml-cpu/
/src/llama.cpp/examples/lookahead/
/src/llama.cpp/ggml/src/ggml-cpu/arch/x86/
/src/llama.cpp/examples/eval-callback/
/src/llama.cpp/examples/diffusion/
/src/llama.cpp/ggml/src/ggml-cpu/llamafile/
/src/llama.cpp/vendor/miniaudio/
/src/llama.cpp/tools/export-lora/
/src/llama.cpp/ggml/src/ggml-sycl/dpct/
/src/llama.cpp/examples/deprecation-warning/
/src/llama.cpp/ggml/src/ggml-vulkan/vulkan-shaders/
/src/llama.cpp/tools/rpc/
/src/llama.cpp/examples/gguf/
/src/llama.cpp/vendor/minja/
/src/llama.cpp/tools/gguf-split/
/src/llama.cpp/common/
/src/llama.cpp/examples/retrieval/
/src/llama.cpp/ggml/include/
/src/llama.cpp/examples/gguf-hash/deps/sha1/
/src/llama.cpp/ggml/src/ggml-cpu/spacemit/
/src/llama.cpp/ggml/src/ggml-cpu/arch/loongarch/
/src/llama.cpp/src/
/src/llama.cpp/vendor/cpp-httplib/
/src/llama.cpp/tools/run/linenoise.cpp/
/src/llama.cpp/examples/convert-llama2c-to-ggml/
/src/llama.cpp/examples/gguf-hash/deps/sha256/
/src/llama.cpp/ggml/src/ggml-cpu/amx/
/src/llama.cpp/pocs/vdot/
/src/llama.cpp/ggml/src/ggml-cpu/kleidiai/
/src/llama.cpp/include/
/src/llama.cpp/ggml/src/ggml-cpu/arch/arm/
/src/llama.cpp/tools/imatrix/
/src/llama.cpp/ggml/src/ggml-metal/
/src/llama.cpp/vendor/nlohmann/
/src/llama.cpp/tools/mtmd/
/src/llama.cpp/ggml/src/ggml-cann/
/src/llama.cpp/ggml/src/ggml-cuda/vendors/
/src/llama.cpp/examples/parallel/
/src/llama.cpp/tools/run/
/src/llama.cpp/ggml/src/
/src/llama.cpp/fuzzers/
/src/llama.cpp/tools/server/
/src/llama.cpp/ggml/src/ggml-webgpu/
/src/llama.cpp/ggml/src/ggml-cpu/arch/powerpc/
/src/llama.cpp/ggml/src/ggml-opencl/
/src/llama.cpp/tools/tts/
/src//src/
/src/llama.cpp/tools/main/
/src/llama.cpp/ggml/src/ggml-rpc/
/src/llama.cpp/examples/gen-docs/
/src/llama.cpp/tools/cvector-generator/
/src/llama.cpp/tools/tokenize/
/src/llama.cpp/ggml/src/ggml-blas/
/src/llama.cpp/examples/gguf-hash/deps/rotate-bits/
/src/llama.cpp/tests/

Sink analyser for CWEs

This section contains multiple tables, one per CWE supported by the sink analyser. Each table lists the sink functions/methods found in the project for that CWE, together with which fuzzers statically reach them and, where none do, a possible call path to the sink. Column 1 gives the name of the sink function/method. Column 2 lists the fuzzers (possibly none) that statically cover it. Column 3 shows a possible call path to the sink when no fuzzer covers it. Column 4 lists possible fuzz blockers that prevent an existing fuzzer from reaching the sink dynamically.
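
For orientation, the first table below covers CWE-787 (out-of-bounds write). Allocation routines such as realloc are treated as sinks because a miscomputed, attacker-influenced size turns a later write into memory corruption. A minimal sketch of the bug class follows; it is illustrative only and not taken from llama.cpp.

    #include <cstddef>
    #include <cstdlib>
    #include <cstring>

    // If old_len + add_len overflows size_t, realloc returns an undersized
    // block and the memcpy below writes out of bounds (CWE-787).
    void append(char **buf, size_t old_len, const char *src, size_t add_len) {
        char *p = static_cast<char *>(realloc(*buf, old_len + add_len));
        if (p == nullptr) return;
        memcpy(p + old_len, src, add_len);
        *buf = p;
    }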

Sink functions/methods found for CWE787

Target sink | Reached by fuzzer | Function call path | Possible branch blockers
realloc | ['/src/llama.cpp/fuzzers/fuzz_structured.cpp', '/src/llama.cpp/fuzzers/fuzz_tokenizer.cpp', '/src/llama.cpp/fuzzers/fuzz_load_model.cpp', '/src/llama.cpp/fuzzers/fuzz_inference.cpp', '/src/llama.cpp/fuzzers/fuzz_structurally_created.cpp'] | N/A
Blocker function | Arguments type | Return type | Constants touched
linenoiseAddCompletion
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp:1020
['linenoiseCompletions*', 'char*'] void []
linenoise
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp:1832
['char*'] char []
alloc_compute_meta
in /src/llama.cpp/tools/mtmd/clip.cpp:2798
['clip_ctx'] void []
LlamaData::init
in /src/llama.cpp/tools/run/run.cpp:774
['Opt'] int []
&operator[](size_t index)
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:2857
['size_t'] typename std::enable_if ::type []
ggml_backend_vk_device_init
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12528
['ggml_backend_dev_t', 'char*'] ggml_backend_t []
ggml_backend_vk_reg_get_device_count
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12940
['ggml_backend_reg_t'] size_t []
ggml_backend_vk_reg_get_device
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12945
['ggml_backend_reg_t', 'size_t'] ggml_backend_dev_t []
ggml_backend_vk_device_get_buffer_type
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12496
['ggml_backend_dev_t'] ggml_backend_buffer_type_t []
ggml_backend_vk_set_tensor_async
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11830
['ggml_backend_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
ggml_backend_vk_get_tensor_async
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11853
['ggml_backend_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
ggml_backend_vk_cpy_tensor_async
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11876
['ggml_backend_t', 'ggml_tensor*', 'ggml_tensor*'] bool []
ggml_backend_vk_device_get_host_buffer_type
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12501
['ggml_backend_dev_t'] ggml_backend_buffer_type_t []
ggml_backend_registry
in /src/llama.cpp/ggml/src/ggml-backend-reg.cpp:180
[] void []
ggml_backend_vk_device_supports_op
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12534
['ggml_backend_dev_t', 'ggml_tensor*'] bool []
llama_model::load_tensors
in /src/llama.cpp/src/llama-model.cpp:2100
['llama_model_loader'] bool []
llama_context::decode
in /src/llama.cpp/src/llama-context.cpp:958
['llama_batch'] int []
llama_context::opt_epoch
in /src/llama.cpp/src/llama-context.cpp:2214
['ggml_opt_dataset_t', 'ggml_opt_result_t', 'ggml_opt_result_t', 'int64_t', 'ggml_opt_epoch_callback', 'ggml_opt_epoch_callback'] void []
Java_android_llama_cpp_LLamaAndroid_new_1context
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:109
['JNIEnv*', 'jlong'] JNIEXPORT []
mtmd_cli_context
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:89
['common_params'] void []
load_model
in /src/llama.cpp/tools/server/server.cpp:2380
['common_params'] bool []
clip_encode_float_image
in /src/llama.cpp/tools/mtmd/clip.cpp:4437
['struct clip_ctx*', 'int', 'float*', 'int', 'int', 'float*'] bool []
eval_message
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:199
['mtmd_cli_context', 'common_chat_msg'] int []
update_slots
in /src/llama.cpp/tools/server/server.cpp:3534
[] void []
test_backend
in /src/llama.cpp/tests/test-opt.cpp:842
['ggml_backend_sched_t', 'ggml_backend_t', 'enum ggml_opt_optimizer_type'] std::pair []
llama_kv_cache::update
in /src/llama.cpp/src/llama-kv-cache.cpp:596
['llama_context*', 'bool', 'stream_copy_info'] bool []
test_multiple_buffer_types
in /src/llama.cpp/tests/test-alloc.cpp:457
[] void []
Java_android_llama_cpp_LLamaAndroid_load_1model
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:83
['JNIEnv*', 'jstring'] JNIEXPORT []
llama_model_load_from_splits
in /src/llama.cpp/src/llama.cpp:306
['char**', 'size_t', 'struct llama_model_params'] struct llama_model []
common_opt_dataset_init
in /src/llama.cpp/common/common.cpp:1562
['struct llama_context*', 'std::vector ', 'int64_t'] ggml_opt_dataset_t []
llama_context::opt_init
in /src/llama.cpp/src/llama-context.cpp:2065
['struct llama_model*', 'struct llama_opt_params'] void []
minja::BinaryOpExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1310
['std::shared_ptr '] Value []
ggml_backend_sycl_graph_compute
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:3986
['ggml_backend_t', 'ggml_cgraph*'] ggml_status []
eval_grad
in /src/llama.cpp/tests/test-backend-ops.cpp:1457
['ggml_backend_t', 'char*', 'printer*'] bool []
test_roundtrip
in /src/llama.cpp/tests/test-gguf.cpp:1073
['ggml_backend_dev_t', 'unsigned int', 'bool'] std::pair []
test_gguf_set_kv
in /src/llama.cpp/tests/test-gguf.cpp:1203
['ggml_backend_dev_t', 'unsigned int'] std::pair []
run_merge
in /src/llama.cpp/tools/export-lora/export-lora.cpp:186
[] void []
PCA::pca_model
in /src/llama.cpp/tools/cvector-generator/pca.hpp:63
['struct ggml_tensor*'] void []
llama_model::create_memory
in /src/llama.cpp/src/llama-model.cpp:19368
['llama_memory_params', 'llama_cparams'] llama_memory_i []
llama_adapter_cvec::apply
in /src/llama.cpp/src/llama-adapter.cpp:94
['llama_model', 'float*', 'size_t', 'int32_t', 'int32_t', 'int32_t'] bool []
ggml_backend_cann_buffer_set_tensor
in /src/llama.cpp/ggml/src/ggml-cann/ggml-cann.cpp:1241
['ggml_backend_buffer_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []

Sink functions/methods found for CWE416
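
CWE-416 is use-after-free. The free sink below is the deallocation call itself; the get sink likely corresponds to raw-pointer accessors such as smart-pointer get() (an assumption based on the name). A minimal sketch of the bug class follows; it is illustrative only and not taken from llama.cpp.

    #include <cstdio>
    #include <cstdlib>

    int main() {
        int *state = static_cast<int *>(malloc(sizeof(int)));
        if (state == nullptr) return 1;
        *state = 42;
        free(state);            // lifetime of the allocation ends here
        printf("%d\n", *state); // use after free (CWE-416): undefined behavior
        return 0;
    }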

Target sink | Reached by fuzzer | Function call path | Possible branch blockers
get | ['/src/llama.cpp/fuzzers/fuzz_json_to_grammar.cpp', '/src/llama.cpp/fuzzers/fuzz_structured.cpp', '/src/llama.cpp/fuzzers/fuzz_tokenizer.cpp', '/src/llama.cpp/fuzzers/fuzz_load_model.cpp', '/src/llama.cpp/fuzzers/fuzz_inference.cpp', '/src/llama.cpp/fuzzers/fuzz_structurally_created.cpp'] | N/A
Blocker function | Arguments type | Return type | Constants touched
minja::BinaryOpExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1310
['std::shared_ptr '] Value []
ggml_backend_sycl_graph_compute
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:3986
['ggml_backend_t', 'ggml_cgraph*'] ggml_status []
eval_grad
in /src/llama.cpp/tests/test-backend-ops.cpp:1457
['ggml_backend_t', 'char*', 'printer*'] bool []
mtmd_cli_context
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:89
['common_params'] void []
load_media
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:153
['std::string'] bool []
clip_graph
in /src/llama.cpp/tools/mtmd/clip.cpp:468
['clip_ctx*', 'clip_image_f32'] void []
clip_model_loader
in /src/llama.cpp/tools/mtmd/clip.cpp:2148
['char*'] void []
LlamaData::init
in /src/llama.cpp/tools/run/run.cpp:774
['Opt'] int []
&operator[](size_t index)
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:2857
['size_t'] typename std::enable_if ::type []
ggml_backend_vk_device_init
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12528
['ggml_backend_dev_t', 'char*'] ggml_backend_t []
ggml_backend_vk_reg_get_device_count
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12940
['ggml_backend_reg_t'] size_t []
ggml_backend_vk_reg_get_device
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12945
['ggml_backend_reg_t', 'size_t'] ggml_backend_dev_t []
ggml_backend_vk_device_get_buffer_type
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12496
['ggml_backend_dev_t'] ggml_backend_buffer_type_t []
ggml_backend_vk_set_tensor_async
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11830
['ggml_backend_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
ggml_backend_vk_get_tensor_async
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11853
['ggml_backend_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
ggml_backend_vk_cpy_tensor_async
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11876
['ggml_backend_t', 'ggml_tensor*', 'ggml_tensor*'] bool []
ggml_backend_vk_device_get_host_buffer_type
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12501
['ggml_backend_dev_t'] ggml_backend_buffer_type_t []
ggml_backend_registry
in /src/llama.cpp/ggml/src/ggml-backend-reg.cpp:180
[] void []
ggml_backend_vk_device_supports_op
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12534
['ggml_backend_dev_t', 'ggml_tensor*'] bool []
llama_model::load_tensors
in /src/llama.cpp/src/llama-model.cpp:2100
['llama_model_loader'] bool []
llama_context::decode
in /src/llama.cpp/src/llama-context.cpp:958
['llama_batch'] int []
llama_context::opt_epoch
in /src/llama.cpp/src/llama-context.cpp:2214
['ggml_opt_dataset_t', 'ggml_opt_result_t', 'ggml_opt_result_t', 'int64_t', 'ggml_opt_epoch_callback', 'ggml_opt_epoch_callback'] void []
load_model
in /src/llama.cpp/tools/server/server.cpp:2380
['common_params'] bool []
Java_android_llama_cpp_LLamaAndroid_load_1model
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:83
['JNIEnv*', 'jstring'] JNIEXPORT []
llama_model_load_from_splits
in /src/llama.cpp/src/llama.cpp:306
['char**', 'size_t', 'struct llama_model_params'] struct llama_model []
clip_image_f32_get_img
in /src/llama.cpp/tools/mtmd/clip.cpp:2996
['struct clip_image_f32_batch*', 'int'] clip_image_f32 []
clip_encode_float_image
in /src/llama.cpp/tools/mtmd/clip.cpp:4437
['struct clip_ctx*', 'int', 'float*', 'int', 'int', 'float*'] bool []
eval_message
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:199
['mtmd_cli_context', 'common_chat_msg'] int []
update_slots
in /src/llama.cpp/tools/server/server.cpp:3534
[] void []
mtmd::nx
in /src/llama.cpp/tools/mtmd/mtmd.h:259
[] uint32_t []
mtmd::ny
in /src/llama.cpp/tools/mtmd/mtmd.h:260
[] uint32_t []
mtmd::data
in /src/llama.cpp/tools/mtmd/mtmd.h:261
[] unsigned char []
mtmd::n_bytes
in /src/llama.cpp/tools/mtmd/mtmd.h:262
[] size_t []
mtmd::id
in /src/llama.cpp/tools/mtmd/mtmd.h:263
[] std::string []
mtmd::set_id
in /src/llama.cpp/tools/mtmd/mtmd.h:264
['char*'] void []
mtmd::c_ptr
in /src/llama.cpp/tools/mtmd/mtmd.h:274
[] std::vector []
*operator[](size_t idx)
in /src/llama.cpp/tools/mtmd/mtmd.h:289
['size_t'] mtmd_input_chunk []
add_media
in /src/llama.cpp/tools/mtmd/mtmd.cpp:480
['mtmd_bitmap*'] int32_t []
chat_loop
in /src/llama.cpp/tools/run/run.cpp:1323
['LlamaData', 'Opt'] int []
get_tts_version
in /src/llama.cpp/tools/tts/tts.cpp:477
['llama_model*'] outetts_version []
audio_text_from_speaker
in /src/llama.cpp/tools/tts/tts.cpp:499
['json'] std::string []
audio_data_from_speaker
in /src/llama.cpp/tools/tts/tts.cpp:512
['json'] std::string []
params_from_json_cmpl
in /src/llama.cpp/tools/server/server.cpp:295
['llama_context*', 'common_params', 'json'] slot_params []
process_single_task
in /src/llama.cpp/tools/server/server.cpp:3314
['server_task'] void []
validate
in /src/llama.cpp/tools/server/utils.hpp:1324
['struct llama_context*'] bool []
tokenize_input_prompts
in /src/llama.cpp/tools/server/utils.hpp:1480
['llama_vocab*', 'mtmd_context*', 'json', 'bool', 'bool'] std::vector []
format_rerank
in /src/llama.cpp/tools/server/utils.hpp:1497
['struct llama_model*', 'struct llama_vocab*', 'mtmd_context*', 'std::string', 'std::string'] server_tokens []
ggml_sycl_op_mul_mat_sycl
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:1929
['ggml_backend_sycl_context', 'ggml_tensor*', 'ggml_tensor*', 'ggml_tensor*', 'char*', 'float*', 'char*', 'float*', 'int64_t', 'int64_t', 'int64_t', 'int64_t', 'queue_ptr'] void []
dpct::detail::gemm_impl
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:1738
['sycl::queue', 'oneapi::math::transpose', 'oneapi::math::transpose', 'int', 'int', 'int', 'void*', 'void*', 'int', 'void*', 'int', 'void*', 'void*', 'int'] void []
operator[](size_t index) const
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:2745
['size_t'] pointer_t []
ggml_backend_zdnn_buffer_free_buffer
in /src/llama.cpp/ggml/src/ggml-zdnn/ggml-zdnn.cpp:199
['ggml_backend_buffer_t'] void []
ggml_backend_blas_graph_compute
in /src/llama.cpp/ggml/src/ggml-blas/ggml-blas.cpp:227
['ggml_backend_t', 'struct ggml_cgraph*'] enum ggml_status []
ggml_backend_cann_buffer_set_tensor
in /src/llama.cpp/ggml/src/ggml-cann/ggml-cann.cpp:1241
['ggml_backend_buffer_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
llama_model::memory_breakdown
in /src/llama.cpp/src/llama-model.cpp:6190
[] std::map []
llama_model::create_memory
in /src/llama.cpp/src/llama-model.cpp:19368
['llama_memory_params', 'llama_cparams'] llama_memory_i []
llama_memory_recurrent::state_read
in /src/llama.cpp/src/llama-memory-recurrent.cpp:736
['llama_io_read_i', 'llama_seq_id', 'llama_state_seq_flags'] void []
llama_memory_recurrent::memory_breakdown
in /src/llama.cpp/src/llama-memory-recurrent.cpp:365
[] std::map []
llama_memory_recurrent::total_size
in /src/llama.cpp/src/llama-memory-recurrent.cpp:663
[] size_t []
llama_kv_cache_iswa::get_base
in /src/llama.cpp/src/llama-kv-cache-iswa.cpp:238
[] llama_kv_cache []
llama_kv_cache_iswa::get_swa
in /src/llama.cpp/src/llama-kv-cache-iswa.cpp:242
[] llama_kv_cache []
llama_kv_cache_iswa_context::get_base
in /src/llama.cpp/src/llama-kv-cache-iswa.cpp:316
[] llama_kv_cache_context []
llama_kv_cache_iswa_context::get_swa
in /src/llama.cpp/src/llama-kv-cache-iswa.cpp:322
[] llama_kv_cache_context []
llama_model_quantize
in /src/llama.cpp/src/llama-quant.cpp:1075
['char*', 'char*', 'llama_model_quantize_params*'] uint32_t []
Java_android_llama_cpp_LLamaAndroid_new_1context
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:109
['JNIEnv*', 'jlong'] JNIEXPORT []
llama_kv_cache::init_batch
in /src/llama.cpp/src/llama-kv-cache.cpp:481
['llama_batch_allocr', 'uint32_t', 'bool'] llama_memory_context_ptr []
llama_kv_cache::state_read
in /src/llama.cpp/src/llama-kv-cache.cpp:1497
['llama_io_read_i', 'llama_seq_id', 'llama_state_seq_flags'] void []
llama_context::get_sched
in /src/llama.cpp/src/llama-context.cpp:443
[] ggml_backend_sched_t []
llama_context::get_memory
in /src/llama.cpp/src/llama-context.cpp:475
[] llama_memory_t []
llama_context::set_abort_callback
in /src/llama.cpp/src/llama-context.cpp:661
['void*'] void []
llama_context::state_set_data
in /src/llama.cpp/src/llama-context.cpp:1661
['uint8_t*', 'size_t'] size_t []
llama_context::state_load_file
in /src/llama.cpp/src/llama-context.cpp:1701
['char*', 'llama_token*', 'size_t', 'size_t*'] bool []
llama_context::get_gf_res_reserve
in /src/llama.cpp/src/llama-context.cpp:1368
[] llm_graph_result []
llama_context::opt_init
in /src/llama.cpp/src/llama-context.cpp:2065
['struct llama_model*', 'struct llama_opt_params'] void []
llama_kv_cache::memory_breakdown
in /src/llama.cpp/src/llama-kv-cache.cpp:473
[] std::map []
llama_kv_cache::total_size
in /src/llama.cpp/src/llama-kv-cache.cpp:1298
[] size_t []
llama_model_loader::llama_model_loader
in /src/llama.cpp/src/llama-model-loader.cpp:471
['std::string', 'std::vector ', 'bool', 'bool', 'llama_model_kv_override*', 'llama_model_tensor_buft_override*'] void []
llama_model_loader::load_all_data
in /src/llama.cpp/src/llama-model-loader.cpp:922
['struct ggml_context*', 'llama_buf_map', 'llama_mlocks*', 'llama_progress_callback', 'void*'] bool []
llama_vocab::impl::tokenize(const std::string& raw_text, bool add_special, bool parse_special) const
in /src/llama.cpp/src/llama-vocab.cpp:2784
['std::string', 'bool', 'bool'] std::vector []
llama_memory_hybrid::get_mem_attn
in /src/llama.cpp/src/llama-memory-hybrid.cpp:193
[] llama_kv_cache []
llama_memory_hybrid::get_mem_recr
in /src/llama.cpp/src/llama-memory-hybrid.cpp:197
[] llama_memory_recurrent []
llama_memory_hybrid_context::get_attn
in /src/llama.cpp/src/llama-memory-hybrid.cpp:262
[] llama_kv_cache_context []
llama_memory_hybrid_context::get_recr
in /src/llama.cpp/src/llama-memory-hybrid.cpp:266
[] llama_memory_recurrent_context []
llm_graph_input_mem_hybrid::get_attn
in /src/llama.cpp/src/llama-graph.h:376
[] llm_graph_input_attn_kv []
llm_graph_input_mem_hybrid::get_recr
in /src/llama.cpp/src/llama-graph.h:377
[] llm_graph_input_rs []
llm_graph_result::get_ctx
in /src/llama.cpp/src/llama-graph.h:478
[] ggml_context []
common_params_parse
in /src/llama.cpp/common/arg.cpp:1629
['int', 'char**', 'common_params', 'llama_example'] bool []
test_failure_left_recursion
in /src/llama.cpp/tests/test-grammar-integration.cpp:702
[] void []
test_template_output_parsers
in /src/llama.cpp/tests/test-chat.cpp:555
[] void []
oaicompat_chat_params_parse
in /src/llama.cpp/tools/server/utils.hpp:534
['json', 'oaicompat_parser_options', 'std::vector '] json []
common_chat_format_single
in /src/llama.cpp/common/chat.cpp:458
['struct common_chat_templates*', 'std::vector ', 'common_chat_msg', 'bool', 'bool'] std::string []
format_input_text
in /src/llama.cpp/examples/diffusion/diffusion-cli.cpp:513
['std::string', 'std::string', 'bool', 'llama_model*'] std::string []
export_md
in /src/llama.cpp/examples/gen-docs/gen-docs.cpp:50
['std::string', 'llama_example'] void []
*operator->()
in /src/llama.cpp/vendor/cpp-httplib/httplib.h:1353
[] Response []
lexer::scan_string
in /src/llama.cpp/vendor/nlohmann/json.hpp:7277
[] token_type []
lexer::scan_comment
in /src/llama.cpp/vendor/nlohmann/json.hpp:7867
[] bool []
lexer::scan_number
in /src/llama.cpp/vendor/nlohmann/json.hpp:7992
[] token_type []
parser::parser
in /src/llama.cpp/vendor/nlohmann/json.hpp:12929
['InputAdapterType'] void []
parser::parse
in /src/llama.cpp/vendor/nlohmann/json.hpp:12951
['bool', 'BasicJsonType'] void []
(3) bool sax_parse(const input_format_t format, json_sax_t* sax_, const bool strict = true, const cbor_tag_handler_t tag_handler = cbor_tag_handler_t::error)
in /src/llama.cpp/vendor/nlohmann/json.hpp:9882
['input_format_t', 'json_sax_t*'] JSON_HEDLEY_NON_NULL []
test_json_healing
in /src/llama.cpp/tests/test-json-partial.cpp:16
[] void []
ggml_backend_rpc_start_server
in /src/llama.cpp/ggml/src/ggml-rpc/ggml-rpc.cpp:1714
['char*', 'char*', 'size_t', 'size_t', 'ggml_backend_dev_t*', 'size_t*', 'size_t*'] void []
common_chat_msg_parser::consume_json
in /src/llama.cpp/common/chat-parser.cpp:392
[] common_json []
test_json_with_dumped_args
in /src/llama.cpp/tests/test-chat-parser.cpp:429
[] void []
common_chat_msg_parser::consume_json_with_dumped_args
in /src/llama.cpp/common/chat-parser.cpp:399
['std::vector >', 'std::vector >'] common_chat_msg_parser::consume_json_result []
parser::accept
in /src/llama.cpp/vendor/nlohmann/json.hpp:13011
[] bool []
from_cbor
in /src/llama.cpp/vendor/nlohmann/json.hpp:24465
['detail::span_input_adapter'] basic_json []
from_msgpack
in /src/llama.cpp/vendor/nlohmann/json.hpp:24520
['detail::span_input_adapter'] basic_json []
from_ubjson
in /src/llama.cpp/vendor/nlohmann/json.hpp:24574
['detail::span_input_adapter'] basic_json []
from_bson
in /src/llama.cpp/vendor/nlohmann/json.hpp:24658
['detail::span_input_adapter'] basic_json []
minja::chat_template::chat_template
in /src/llama.cpp/vendor/minja/chat-template.hpp:109
['std::string', 'std::string', 'std::string'] void []
operator<(const Value& other) const
in /src/llama.cpp/vendor/minja/minja.hpp:341
['Value'] bool []
operator>(const Value& other) const
in /src/llama.cpp/vendor/minja/minja.hpp:350
['Value'] bool []
operator==(const Value& other) const
in /src/llama.cpp/vendor/minja/minja.hpp:359
['Value'] bool []
&at(const Value& index)
in /src/llama.cpp/vendor/minja/minja.hpp:419
['Value'] Value []
operator-() const
in /src/llama.cpp/vendor/minja/minja.hpp:454
[] Value []
operator+(const Value& rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:468
['Value'] Value []
operator-(const Value& rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:482
['Value'] Value []
operator*(const Value& rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:488
['Value'] Value []
operator/(const Value& rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:501
['Value'] Value []
operator%(const Value& rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:507
['Value'] Value []
Value::get() const
in /src/llama.cpp/vendor/minja/minja.hpp:544
[] json []
operator()(const minja::Value& v) const
in /src/llama.cpp/vendor/minja/minja.hpp:578
['minja::Value'] size_t []
minja::SetNode::do_render
in /src/llama.cpp/vendor/minja/minja.hpp:1113
['std::shared_ptr '] void []
minja::SubscriptExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1218
['std::shared_ptr '] Value []
minja::MethodCallExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1470
['std::shared_ptr '] Value []
minja::FilterExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1619
['std::shared_ptr '] Value []
free | ['/src/llama.cpp/fuzzers/fuzz_structured.cpp', '/src/llama.cpp/fuzzers/fuzz_tokenizer.cpp', '/src/llama.cpp/fuzzers/fuzz_load_model.cpp', '/src/llama.cpp/fuzzers/fuzz_inference.cpp', '/src/llama.cpp/fuzzers/fuzz_structurally_created.cpp'] | N/A
Blocker function | Arguments type | Return type | Constants touched
clip_log_internal
in /src/llama.cpp/tools/mtmd/clip-impl.h:231
['enum ggml_log_level', 'char*'] void []
ingest_args
in /src/llama.cpp/tools/tokenize/tokenize.cpp:80
['int', 'char**'] std::vector []
write_utf8_cstr_to_stdout
in /src/llama.cpp/tools/tokenize/tokenize.cpp:132
['char*', 'bool'] void []
linenoise
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp:1832
['char*'] char []
linenoiseFree
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp:1861
['void*'] void []
linenoiseAtExit
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp:1881
[] void []
chat_loop
in /src/llama.cpp/tools/run/run.cpp:1323
['LlamaData', 'Opt'] int []
linenoiseHistoryLoad
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp:1977
['char*'] int []
linenoiseHistorySetMaxLen
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp:1926
['int'] int []
~linenoiseCompletions()
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.h:78
[] void []
~train_context()
in /src/llama.cpp/tools/cvector-generator/cvector-generator.cpp:262
[] void []
minja::BinaryOpExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1310
['std::shared_ptr '] Value []
ggml_backend_sycl_graph_compute
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:3986
['ggml_backend_t', 'ggml_cgraph*'] ggml_status []
eval_grad
in /src/llama.cpp/tests/test-backend-ops.cpp:1457
['ggml_backend_t', 'char*', 'printer*'] bool []
ggml_vk_test_dequant
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:10178
['ggml_backend_vk_context*', 'size_t', 'ggml_type'] void []
ggml_backend_vk_graph_compute
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12016
['ggml_backend_t', 'ggml_cgraph*'] ggml_status []
llama_model_quantize
in /src/llama.cpp/src/llama-quant.cpp:1075
['char*', 'char*', 'llama_model_quantize_params*'] uint32_t []
Java_android_llama_cpp_LLamaAndroid_backend_1free
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:147
[] JNIEXPORT []
PCA::run_pca
in /src/llama.cpp/tools/cvector-generator/pca.hpp:296
['struct pca_params', 'std::vector ', 'std::vector '] void []
ggml_backend_tensor_copy_async
in /src/llama.cpp/ggml/src/ggml-backend.cpp:407
['ggml_backend_t', 'ggml_backend_t', 'struct ggml_tensor*', 'struct ggml_tensor*'] void []
clip_encode_float_image
in /src/llama.cpp/tools/mtmd/clip.cpp:4437
['struct clip_ctx*', 'int', 'float*', 'int', 'int', 'float*'] bool []
eval_message
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:199
['mtmd_cli_context', 'common_chat_msg'] int []
update_slots
in /src/llama.cpp/tools/server/server.cpp:3534
[] void []
test_backend
in /src/llama.cpp/tests/test-opt.cpp:842
['ggml_backend_sched_t', 'ggml_backend_t', 'enum ggml_opt_optimizer_type'] std::pair []
llama_context::opt_epoch
in /src/llama.cpp/src/llama-context.cpp:2214
['ggml_opt_dataset_t', 'ggml_opt_result_t', 'ggml_opt_result_t', 'int64_t', 'ggml_opt_epoch_callback', 'ggml_opt_epoch_callback'] void []
llama_context::decode
in /src/llama.cpp/src/llama-context.cpp:958
['llama_batch'] int []
llama_kv_cache::update
in /src/llama.cpp/src/llama-kv-cache.cpp:596
['llama_context*', 'bool', 'stream_copy_info'] bool []
ggml_backend_multi_buffer_free_buffer
in /src/llama.cpp/ggml/src/ggml-backend.cpp:580
['ggml_backend_buffer_t'] void []
operator()(ggml_backend_sched_t sched)
in /src/llama.cpp/ggml/include/ggml-cpp.h:34
['ggml_backend_sched_t'] void []
test_multiple_buffer_types
in /src/llama.cpp/tests/test-alloc.cpp:457
[] void []
test_buffer_size_zero
in /src/llama.cpp/tests/test-alloc.cpp:525
[] void []
clip_ctx
in /src/llama.cpp/tools/mtmd/clip.cpp:395
['clip_context_params'] void []
LlamaData::init
in /src/llama.cpp/tools/run/run.cpp:774
['Opt'] int []
&operator[](size_t index)
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:2857
['size_t'] typename std::enable_if ::type []
ggml_backend_vk_device_init
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12528
['ggml_backend_dev_t', 'char*'] ggml_backend_t []
ggml_backend_vk_reg_get_device_count
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12940
['ggml_backend_reg_t'] size_t []
ggml_backend_vk_reg_get_device
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12945
['ggml_backend_reg_t', 'size_t'] ggml_backend_dev_t []
ggml_backend_vk_device_get_buffer_type
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12496
['ggml_backend_dev_t'] ggml_backend_buffer_type_t []
ggml_backend_vk_set_tensor_async
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11830
['ggml_backend_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
ggml_backend_vk_get_tensor_async
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11853
['ggml_backend_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
ggml_backend_vk_cpy_tensor_async
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11876
['ggml_backend_t', 'ggml_tensor*', 'ggml_tensor*'] bool []
ggml_backend_vk_device_get_host_buffer_type
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12501
['ggml_backend_dev_t'] ggml_backend_buffer_type_t []
ggml_backend_registry
in /src/llama.cpp/ggml/src/ggml-backend-reg.cpp:180
[] void []
ggml_backend_vk_device_supports_op
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12534
['ggml_backend_dev_t', 'ggml_tensor*'] bool []
llama_model::load_tensors
in /src/llama.cpp/src/llama-model.cpp:2100
['llama_model_loader'] bool []
Java_android_llama_cpp_LLamaAndroid_new_1context
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:109
['JNIEnv*', 'jlong'] JNIEXPORT []
mtmd_cli_context
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:89
['common_params'] void []
load_model
in /src/llama.cpp/tools/server/server.cpp:2380
['common_params'] bool []
test_max_size_too_many_tensors
in /src/llama.cpp/tests/test-alloc.cpp:267
[] void []
test_max_size_tensor_too_large
in /src/llama.cpp/tests/test-alloc.cpp:290
[] void []
test_tensor_larger_than_max_size
in /src/llama.cpp/tests/test-alloc.cpp:310
[] void []
test_not_enough_chunks
in /src/llama.cpp/tests/test-alloc.cpp:328
[] void []
test_fill_leftover_space
in /src/llama.cpp/tests/test-alloc.cpp:353
[] void []
test_view_inplace
in /src/llama.cpp/tests/test-alloc.cpp:371
[] void []
~ggml_backend_sycl_buffer_context()
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:309
[] void []
~ggml_sycl_pool_leg()
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:1190
[] void []
~ggml_sycl_pool_host()
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:1295
[] void []
ggml_backend_sycl_host_buffer_free_buffer
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:1136
['ggml_backend_buffer_t'] void []
ggml_backend_sycl_buffer_reset
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:545
['ggml_backend_buffer_t'] void []
~ggml_backend_sycl_split_buffer_context()
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:778
[] void []
~device_memory()
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:2814
[] void []
test_merge_free_block
in /src/llama.cpp/tests/test-alloc.cpp:414
['size_t'] void []
test_prefer_already_allocated_memory
in /src/llama.cpp/tests/test-alloc.cpp:439
[] void []
test_reallocation
in /src/llama.cpp/tests/test-alloc.cpp:553
[] void []
lora_merge_ctx
in /src/llama.cpp/tools/export-lora/export-lora.cpp:130
['std::string', 'std::vector ', 'std::string', 'int'] void []
alloc_compute_meta
in /src/llama.cpp/tools/mtmd/clip.cpp:2798
['clip_ctx'] void []
run_merge
in /src/llama.cpp/tools/export-lora/export-lora.cpp:186
[] void []
~lora_merge_ctx()
in /src/llama.cpp/tools/export-lora/export-lora.cpp:398
[] void []
operator()(ggml_gallocr_t galloc)
in /src/llama.cpp/ggml/include/ggml-cpp.h:25
['ggml_gallocr_t'] void []
Java_android_llama_cpp_LLamaAndroid_load_1model
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:83
['JNIEnv*', 'jstring'] JNIEXPORT []
llama_model_load_from_splits
in /src/llama.cpp/src/llama.cpp:306
['char**', 'size_t', 'struct llama_model_params'] struct llama_model []
common_opt_dataset_init
in /src/llama.cpp/common/common.cpp:1562
['struct llama_context*', 'std::vector ', 'int64_t'] ggml_opt_dataset_t []
llama_context::opt_init
in /src/llama.cpp/src/llama-context.cpp:2065
['struct llama_model*', 'struct llama_opt_params'] void []
test_roundtrip
in /src/llama.cpp/tests/test-gguf.cpp:1073
['ggml_backend_dev_t', 'unsigned int', 'bool'] std::pair []
test_gguf_set_kv
in /src/llama.cpp/tests/test-gguf.cpp:1203
['ggml_backend_dev_t', 'unsigned int'] std::pair []
PCA::pca_model
in /src/llama.cpp/tools/cvector-generator/pca.hpp:63
['struct ggml_tensor*'] void []
llama_model::create_memory
in /src/llama.cpp/src/llama-model.cpp:19368
['llama_memory_params', 'llama_cparams'] llama_memory_i []
llama_adapter_cvec::apply
in /src/llama.cpp/src/llama-adapter.cpp:94
['llama_model', 'float*', 'size_t', 'int32_t', 'int32_t', 'int32_t'] bool []
ggml_log_internal
in /src/llama.cpp/ggml/src/ggml.c:268
['enum ggml_log_level', 'char*'] void []
ggml_backend_cpu_buffer_free_buffer
in /src/llama.cpp/ggml/src/ggml-backend.cpp:2075
['ggml_backend_buffer_t'] void []
show_test_coverage
in /src/llama.cpp/tests/test-backend-ops.cpp:7076
[] void []
test_handcrafted_file
in /src/llama.cpp/tests/test-gguf.cpp:662
['unsigned int'] std::pair []
~file_input()
in /src/llama.cpp/tools/export-lora/export-lora.cpp:108
[] void []
~pca_model()
in /src/llama.cpp/tools/cvector-generator/pca.hpp:127
[] void []
IMatrixCollector::collect_imatrix
in /src/llama.cpp/tools/imatrix/imatrix.cpp:219
['struct ggml_tensor*', 'bool', 'void*'] bool []
IMatrixCollector::load_imatrix
in /src/llama.cpp/tools/imatrix/imatrix.cpp:716
['char*'] bool []
gguf_merge
in /src/llama.cpp/tools/gguf-split/gguf-split.cpp:398
['split_params'] void []
show_statistics
in /src/llama.cpp/tools/imatrix/imatrix.cpp:1084
['common_params'] bool []
prepare_imatrix
in /src/llama.cpp/tools/quantize/quantize.cpp:331
['std::string', 'std::vector ', 'std::vector ', 'std::vector ', 'std::unordered_map >'] int []
llama_context::~llama_context()
in /src/llama.cpp/src/llama-context.cpp:401
[] void []
clip_model_loader
in /src/llama.cpp/tools/mtmd/clip.cpp:2148
['char*'] void []
file_input
in /src/llama.cpp/tools/export-lora/export-lora.cpp:69
['std::string', 'float'] void []
gguf_split
in /src/llama.cpp/tools/gguf-split/gguf-split.cpp:360
['split_params'] void []
llama_model_loader::llama_model_loader
in /src/llama.cpp/src/llama-model-loader.cpp:471
['std::string', 'std::vector ', 'bool', 'bool', 'llama_model_kv_override*', 'llama_model_tensor_buft_override*'] void []
common_params_parse
in /src/llama.cpp/common/arg.cpp:1629
['int', 'char**', 'common_params', 'llama_example'] bool []
gguf_ex_read_0
in /src/llama.cpp/examples/gguf/gguf.cpp:86
['std::string'] bool []
gguf_ex_read_1
in /src/llama.cpp/examples/gguf/gguf.cpp:150
['std::string', 'bool'] bool []
gguf_hash
in /src/llama.cpp/examples/gguf-hash/gguf-hash.cpp:286
['hash_params'] hash_exit_code_t []
ggml_graph_compute_helper
in /src/llama.cpp/tests/test-rope.cpp:116
['std::vector ', 'ggml_cgraph*', 'int'] void []
ggml_backend_cpu_graph_plan_compute
in /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp:159
['ggml_backend_t', 'ggml_backend_graph_plan_t'] enum ggml_status []
ggml_backend_cpu_graph_compute
in /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp:167
['ggml_backend_t', 'struct ggml_cgraph*'] enum ggml_status []
ggml_backend_cpu_device_init_backend
in /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp:389
['ggml_backend_dev_t', 'char*'] ggml_backend_t []
ggml_backend_cpu_get_features
in /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp:514
['ggml_backend_reg_t'] ggml_backend_feature []
ggml::cpu::repack::extra_buffer_type::supports_op
in /src/llama.cpp/ggml/src/ggml-cpu/repack.cpp:1920
['struct ggml_tensor*'] bool []
ggml::cpu::repack::extra_buffer_type::get_tensor_traits
in /src/llama.cpp/ggml/src/ggml-cpu/repack.cpp:1956
['struct ggml_tensor*'] ggml::cpu::tensor_traits []
ggml_backend_cpu_device_get_extra_buffers_type
in /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp:76
['ggml_backend_dev_t'] ggml_backend_buffer_type_t []
ggml_backend_cpu_device_supports_op
in /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp:409
['ggml_backend_dev_t', 'struct ggml_tensor*'] bool []
ggml_backend_cpu_device_supports_buft
in /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp:462
['ggml_backend_dev_t', 'ggml_backend_buffer_type_t'] bool []
ggml_graph_compute_secondary_thread
in /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:2976
['void*'] thread_ret_t []
ggml_backend_cpu_graph_plan_create
in /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp:128
['ggml_backend_t', 'struct ggml_cgraph*'] ggml_backend_graph_plan_t []
ggml::cpu::kleidiai::extra_buffer_type::supports_op
in /src/llama.cpp/ggml/src/ggml-cpu/kleidiai/kleidiai.cpp:536
['struct ggml_tensor*'] bool []
ggml::cpu::kleidiai::extra_buffer_type::get_tensor_traits
in /src/llama.cpp/ggml/src/ggml-cpu/kleidiai/kleidiai.cpp:553
['struct ggml_tensor*'] ggml::cpu::tensor_traits []
ggml::cpu::riscv64_spacemit::extra_buffer_type::supports_op
in /src/llama.cpp/ggml/src/ggml-cpu/spacemit/ime.cpp:958
['struct ggml_tensor*'] bool []
ggml::cpu::riscv64_spacemit::extra_buffer_type::get_tensor_traits
in /src/llama.cpp/ggml/src/ggml-cpu/spacemit/ime.cpp:985
['struct ggml_tensor*'] ggml::cpu::tensor_traits []
ggml::cpu::amx::extra_buffer_type::supports_op
in /src/llama.cpp/ggml/src/ggml-cpu/amx/amx.cpp:143
['struct ggml_tensor*'] bool []
ggml::cpu::amx::extra_buffer_type::get_tensor_traits
in /src/llama.cpp/ggml/src/ggml-cpu/amx/amx.cpp:167
['struct ggml_tensor*'] ggml::cpu::tensor_traits []
operator()(ggml_context * ctx)
in /src/llama.cpp/ggml/include/ggml-cpp.h:17
['ggml_context*'] void []
Java_android_llama_cpp_LLamaAndroid_backend_1init
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:331
[] JNIEXPORT []
gguf_ex_write
in /src/llama.cpp/examples/gguf/gguf.cpp:21
['std::string'] bool []
ggml_backend_metal_free
in /src/llama.cpp/ggml/src/ggml-metal/ggml-metal.cpp:373
['ggml_backend_t'] void []
ggml_backend_sycl_buffer_set_tensor
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:384
['ggml_backend_buffer_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
ggml_backend_sycl_buffer_cpy_tensor
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:442
['ggml_backend_buffer_t', 'ggml_tensor*', 'ggml_tensor*'] bool []
~ggml_sycl_pool_alloc()
in /src/llama.cpp/ggml/src/ggml-sycl/common.hpp:240
[] void []
~host_buffer()
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:2141
[] void []
ggml_backend_opencl_init
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:3033
[] ggml_backend_t []
ggml_backend_opencl_buffer_init_tensor
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:3222
['ggml_backend_buffer_t', 'ggml_tensor*'] enum ggml_status []
ggml_backend_opencl_buffer_set_tensor
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:3279
['ggml_backend_buffer_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
ggml_backend_opencl_buffer_get_tensor
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:3680
['ggml_backend_buffer_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
ggml_backend_opencl_buffer_clear
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:3786
['ggml_backend_buffer_t', 'uint8_t'] void []
ggml_backend_opencl_buffer_type_alloc_buffer
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:3825
['ggml_backend_buffer_type_t', 'size_t'] ggml_backend_buffer_t []
ggml_backend_opencl_buffer_type_get_alignment
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:3843
['ggml_backend_buffer_type_t'] size_t []
ggml_backend_opencl_buffer_type_get_max_size
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:3848
['ggml_backend_buffer_type_t'] size_t []
ggml_backend_opencl_device_init
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:3913
['ggml_backend_dev_t', 'char*'] ggml_backend_t []
ggml_backend_opencl_device_supports_buft
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:3954
['ggml_backend_dev_t', 'ggml_backend_buffer_type_t'] bool []
dump_tensor
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:4059
['ggml_backend_t', 'struct ggml_tensor*'] void []
ggml_backend_amx_buffer_free_buffer
in /src/llama.cpp/ggml/src/ggml-cpu/amx/amx.cpp:45
['ggml_backend_buffer_t'] void []
ggml_backend_zdnn_buffer_free_buffer
in /src/llama.cpp/ggml/src/ggml-zdnn/ggml-zdnn.cpp:199
['ggml_backend_buffer_t'] void []
ggml_backend_zdnn_free
in /src/llama.cpp/ggml/src/ggml-zdnn/ggml-zdnn.cpp:403
['ggml_backend_t'] void []
~ggml_cann_pool_alloc()
in /src/llama.cpp/ggml/src/ggml-cann/common.h:173
[] void []
ggml_backend_cann_buffer_set_tensor
in /src/llama.cpp/ggml/src/ggml-cann/ggml-cann.cpp:1241
['ggml_backend_buffer_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
ggml_backend_cann_buffer_get_tensor
in /src/llama.cpp/ggml/src/ggml-cann/ggml-cann.cpp:1286
['ggml_backend_buffer_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
~mtmd_cli_context()
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:123
[] void []
perplexity
in /src/llama.cpp/tools/perplexity/perplexity.cpp:441
['llama_context*', 'common_params', 'int32_t'] results_perplexity []
hellaswag_score
in /src/llama.cpp/tools/perplexity/perplexity.cpp:741
['llama_context*', 'common_params'] void []
multiple_choice_score
in /src/llama.cpp/tools/perplexity/perplexity.cpp:1402
['llama_context*', 'common_params'] void []
kl_divergence
in /src/llama.cpp/tools/perplexity/perplexity.cpp:1692
['llama_context*', 'common_params'] void []
compute_imatrix
in /src/llama.cpp/tools/imatrix/imatrix.cpp:909
['llama_context*', 'common_params', 'int32_t'] bool []
~server_context()
in /src/llama.cpp/tools/server/server.cpp:2360
[] void []
process_single_task
in /src/llama.cpp/tools/server/server.cpp:3314
['server_task'] void []
diffusion_generate
in /src/llama.cpp/examples/diffusion/diffusion-cli.cpp:206
['llama_context*', 'llama_token*', 'llama_token*', 'int32_t', 'diffusion_params', 'int32_t'] void []
XXH_errorcode::XXH32_freeState
in /src/llama.cpp/examples/gguf-hash/deps/xxhash/xxhash.h:3126
['XXH32_state_t*'] XXH_PUBLIC_API []
XXH_errorcode::XXH64_freeState
in /src/llama.cpp/examples/gguf-hash/deps/xxhash/xxhash.h:3572
['XXH64_state_t*'] XXH_PUBLIC_API []
XXH_errorcode::XXH3_freeState
in /src/llama.cpp/examples/gguf-hash/deps/xxhash/xxhash.h:6157
['XXH3_state_t*'] XXH_PUBLIC_API []
ma_context_get_device_info__alsa
in /src/llama.cpp/vendor/miniaudio/miniaudio.h:29041
['ma_context*', 'ma_device_type', 'ma_device_id*', 'ma_device_info*'] ma_result []
ma_device_init__alsa
in /src/llama.cpp/vendor/miniaudio/miniaudio.h:29636
['ma_device*', 'ma_device_config*', 'ma_device_descriptor*', 'ma_device_descriptor*'] ma_result []

Sink functions/methods found for CWE20 (Improper Input Validation); an illustrative harness sketch follows the blocker table below.

Target sink Reached by fuzzer Function call path Possible branch blockers
get ['/src/llama.cpp/fuzzers/fuzz_json_to_grammar.cpp', '/src/llama.cpp/fuzzers/fuzz_structured.cpp', '/src/llama.cpp/fuzzers/fuzz_tokenizer.cpp', '/src/llama.cpp/fuzzers/fuzz_load_model.cpp', '/src/llama.cpp/fuzzers/fuzz_inference.cpp', '/src/llama.cpp/fuzzers/fuzz_structurally_created.cpp'] N/A
Blocker function Arguments type Return type Constants touched
minja::BinaryOpExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1310
['std::shared_ptr '] Value []
ggml_backend_sycl_graph_compute
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:3986
['ggml_backend_t', 'ggml_cgraph*'] ggml_status []
eval_grad
in /src/llama.cpp/tests/test-backend-ops.cpp:1457
['ggml_backend_t', 'char*', 'printer*'] bool []
mtmd_cli_context
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:89
['common_params'] void []
load_media
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:153
['std::string'] bool []
clip_graph
in /src/llama.cpp/tools/mtmd/clip.cpp:468
['clip_ctx*', 'clip_image_f32'] void []
clip_model_loader
in /src/llama.cpp/tools/mtmd/clip.cpp:2148
['char*'] void []
LlamaData::init
in /src/llama.cpp/tools/run/run.cpp:774
['Opt'] int []
&operator[](size_t index)
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:2857
['size_t'] typename std::enable_if ::type []
ggml_backend_vk_device_init
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12528
['ggml_backend_dev_t', 'char*'] ggml_backend_t []
ggml_backend_vk_reg_get_device_count
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12940
['ggml_backend_reg_t'] size_t []
ggml_backend_vk_reg_get_device
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12945
['ggml_backend_reg_t', 'size_t'] ggml_backend_dev_t []
ggml_backend_vk_device_get_buffer_type
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12496
['ggml_backend_dev_t'] ggml_backend_buffer_type_t []
ggml_backend_vk_set_tensor_async
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11830
['ggml_backend_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
ggml_backend_vk_get_tensor_async
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11853
['ggml_backend_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
ggml_backend_vk_cpy_tensor_async
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11876
['ggml_backend_t', 'ggml_tensor*', 'ggml_tensor*'] bool []
ggml_backend_vk_device_get_host_buffer_type
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12501
['ggml_backend_dev_t'] ggml_backend_buffer_type_t []
ggml_backend_registry
in /src/llama.cpp/ggml/src/ggml-backend-reg.cpp:180
[] void []
ggml_backend_vk_device_supports_op
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12534
['ggml_backend_dev_t', 'ggml_tensor*'] bool []
llama_model::load_tensors
in /src/llama.cpp/src/llama-model.cpp:2100
['llama_model_loader'] bool []
llama_context::decode
in /src/llama.cpp/src/llama-context.cpp:958
['llama_batch'] int []
llama_context::opt_epoch
in /src/llama.cpp/src/llama-context.cpp:2214
['ggml_opt_dataset_t', 'ggml_opt_result_t', 'ggml_opt_result_t', 'int64_t', 'ggml_opt_epoch_callback', 'ggml_opt_epoch_callback'] void []
load_model
in /src/llama.cpp/tools/server/server.cpp:2380
['common_params'] bool []
Java_android_llama_cpp_LLamaAndroid_load_1model
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:83
['JNIEnv*', 'jstring'] JNIEXPORT []
llama_model_load_from_splits
in /src/llama.cpp/src/llama.cpp:306
['char**', 'size_t', 'struct llama_model_params'] struct llama_model []
clip_image_f32_get_img
in /src/llama.cpp/tools/mtmd/clip.cpp:2996
['struct clip_image_f32_batch*', 'int'] clip_image_f32 []
clip_encode_float_image
in /src/llama.cpp/tools/mtmd/clip.cpp:4437
['struct clip_ctx*', 'int', 'float*', 'int', 'int', 'float*'] bool []
eval_message
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:199
['mtmd_cli_context', 'common_chat_msg'] int []
update_slots
in /src/llama.cpp/tools/server/server.cpp:3534
[] void []
mtmd::nx
in /src/llama.cpp/tools/mtmd/mtmd.h:259
[] uint32_t []
mtmd::ny
in /src/llama.cpp/tools/mtmd/mtmd.h:260
[] uint32_t []
mtmd::data
in /src/llama.cpp/tools/mtmd/mtmd.h:261
[] unsigned char []
mtmd::n_bytes
in /src/llama.cpp/tools/mtmd/mtmd.h:262
[] size_t []
mtmd::id
in /src/llama.cpp/tools/mtmd/mtmd.h:263
[] std::string []
mtmd::set_id
in /src/llama.cpp/tools/mtmd/mtmd.h:264
['char*'] void []
mtmd::c_ptr
in /src/llama.cpp/tools/mtmd/mtmd.h:274
[] std::vector []
*operator[](size_t idx)
in /src/llama.cpp/tools/mtmd/mtmd.h:289
['size_t'] mtmd_input_chunk []
add_media
in /src/llama.cpp/tools/mtmd/mtmd.cpp:480
['mtmd_bitmap*'] int32_t []
chat_loop
in /src/llama.cpp/tools/run/run.cpp:1323
['LlamaData', 'Opt'] int []
get_tts_version
in /src/llama.cpp/tools/tts/tts.cpp:477
['llama_model*'] outetts_version []
audio_text_from_speaker
in /src/llama.cpp/tools/tts/tts.cpp:499
['json'] std::string []
audio_data_from_speaker
in /src/llama.cpp/tools/tts/tts.cpp:512
['json'] std::string []
params_from_json_cmpl
in /src/llama.cpp/tools/server/server.cpp:295
['llama_context*', 'common_params', 'json'] slot_params []
process_single_task
in /src/llama.cpp/tools/server/server.cpp:3314
['server_task'] void []
validate
in /src/llama.cpp/tools/server/utils.hpp:1324
['struct llama_context*'] bool []
tokenize_input_prompts
in /src/llama.cpp/tools/server/utils.hpp:1480
['llama_vocab*', 'mtmd_context*', 'json', 'bool', 'bool'] std::vector []
format_rerank
in /src/llama.cpp/tools/server/utils.hpp:1497
['struct llama_model*', 'struct llama_vocab*', 'mtmd_context*', 'std::string', 'std::string'] server_tokens []
ggml_sycl_op_mul_mat_sycl
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:1929
['ggml_backend_sycl_context', 'ggml_tensor*', 'ggml_tensor*', 'ggml_tensor*', 'char*', 'float*', 'char*', 'float*', 'int64_t', 'int64_t', 'int64_t', 'int64_t', 'queue_ptr'] void []
dpct::detail::gemm_impl
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:1738
['sycl::queue', 'oneapi::math::transpose', 'oneapi::math::transpose', 'int', 'int', 'int', 'void*', 'void*', 'int', 'void*', 'int', 'void*', 'void*', 'int'] void []
operator[](size_t index) const
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:2745
['size_t'] pointer_t []
ggml_backend_zdnn_buffer_free_buffer
in /src/llama.cpp/ggml/src/ggml-zdnn/ggml-zdnn.cpp:199
['ggml_backend_buffer_t'] void []
ggml_backend_blas_graph_compute
in /src/llama.cpp/ggml/src/ggml-blas/ggml-blas.cpp:227
['ggml_backend_t', 'struct ggml_cgraph*'] enum ggml_status []
ggml_backend_cann_buffer_set_tensor
in /src/llama.cpp/ggml/src/ggml-cann/ggml-cann.cpp:1241
['ggml_backend_buffer_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
llama_model::memory_breakdown
in /src/llama.cpp/src/llama-model.cpp:6190
[] std::map []
llama_model::create_memory
in /src/llama.cpp/src/llama-model.cpp:19368
['llama_memory_params', 'llama_cparams'] llama_memory_i []
llama_memory_recurrent::state_read
in /src/llama.cpp/src/llama-memory-recurrent.cpp:736
['llama_io_read_i', 'llama_seq_id', 'llama_state_seq_flags'] void []
llama_memory_recurrent::memory_breakdown
in /src/llama.cpp/src/llama-memory-recurrent.cpp:365
[] std::map []
llama_memory_recurrent::total_size
in /src/llama.cpp/src/llama-memory-recurrent.cpp:663
[] size_t []
llama_kv_cache_iswa::get_base
in /src/llama.cpp/src/llama-kv-cache-iswa.cpp:238
[] llama_kv_cache []
llama_kv_cache_iswa::get_swa
in /src/llama.cpp/src/llama-kv-cache-iswa.cpp:242
[] llama_kv_cache []
llama_kv_cache_iswa_context::get_base
in /src/llama.cpp/src/llama-kv-cache-iswa.cpp:316
[] llama_kv_cache_context []
llama_kv_cache_iswa_context::get_swa
in /src/llama.cpp/src/llama-kv-cache-iswa.cpp:322
[] llama_kv_cache_context []
llama_model_quantize
in /src/llama.cpp/src/llama-quant.cpp:1075
['char*', 'char*', 'llama_model_quantize_params*'] uint32_t []
Java_android_llama_cpp_LLamaAndroid_new_1context
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:109
['JNIEnv*', 'jlong'] JNIEXPORT []
llama_kv_cache::init_batch
in /src/llama.cpp/src/llama-kv-cache.cpp:481
['llama_batch_allocr', 'uint32_t', 'bool'] llama_memory_context_ptr []
llama_kv_cache::state_read
in /src/llama.cpp/src/llama-kv-cache.cpp:1497
['llama_io_read_i', 'llama_seq_id', 'llama_state_seq_flags'] void []
llama_context::get_sched
in /src/llama.cpp/src/llama-context.cpp:443
[] ggml_backend_sched_t []
llama_context::get_memory
in /src/llama.cpp/src/llama-context.cpp:475
[] llama_memory_t []
llama_context::set_abort_callback
in /src/llama.cpp/src/llama-context.cpp:661
['void*'] void []
llama_context::state_set_data
in /src/llama.cpp/src/llama-context.cpp:1661
['uint8_t*', 'size_t'] size_t []
llama_context::state_load_file
in /src/llama.cpp/src/llama-context.cpp:1701
['char*', 'llama_token*', 'size_t', 'size_t*'] bool []
llama_context::get_gf_res_reserve
in /src/llama.cpp/src/llama-context.cpp:1368
[] llm_graph_result []
llama_context::opt_init
in /src/llama.cpp/src/llama-context.cpp:2065
['struct llama_model*', 'struct llama_opt_params'] void []
llama_kv_cache::memory_breakdown
in /src/llama.cpp/src/llama-kv-cache.cpp:473
[] std::map []
llama_kv_cache::total_size
in /src/llama.cpp/src/llama-kv-cache.cpp:1298
[] size_t []
llama_model_loader::llama_model_loader
in /src/llama.cpp/src/llama-model-loader.cpp:471
['std::string', 'std::vector ', 'bool', 'bool', 'llama_model_kv_override*', 'llama_model_tensor_buft_override*'] void []
llama_model_loader::load_all_data
in /src/llama.cpp/src/llama-model-loader.cpp:922
['struct ggml_context*', 'llama_buf_map', 'llama_mlocks*', 'llama_progress_callback', 'void*'] bool []
llama_vocab::impl::tokenize(const std::string & raw_text, bool add_special, bool parse_special) const
in /src/llama.cpp/src/llama-vocab.cpp:2784
['std::string', 'bool', 'bool'] std::vector []
llama_memory_hybrid::get_mem_attn
in /src/llama.cpp/src/llama-memory-hybrid.cpp:193
[] llama_kv_cache []
llama_memory_hybrid::get_mem_recr
in /src/llama.cpp/src/llama-memory-hybrid.cpp:197
[] llama_memory_recurrent []
llama_memory_hybrid_context::get_attn
in /src/llama.cpp/src/llama-memory-hybrid.cpp:262
[] llama_kv_cache_context []
llama_memory_hybrid_context::get_recr
in /src/llama.cpp/src/llama-memory-hybrid.cpp:266
[] llama_memory_recurrent_context []
llm_graph_input_mem_hybrid::get_attn
in /src/llama.cpp/src/llama-graph.h:376
[] llm_graph_input_attn_kv []
llm_graph_input_mem_hybrid::get_recr
in /src/llama.cpp/src/llama-graph.h:377
[] llm_graph_input_rs []
llm_graph_result::get_ctx
in /src/llama.cpp/src/llama-graph.h:478
[] ggml_context []
common_params_parse
in /src/llama.cpp/common/arg.cpp:1629
['int', 'char**', 'common_params', 'llama_example'] bool []
test_failure_left_recursion
in /src/llama.cpp/tests/test-grammar-integration.cpp:702
[] void []
test_template_output_parsers
in /src/llama.cpp/tests/test-chat.cpp:555
[] void []
oaicompat_chat_params_parse
in /src/llama.cpp/tools/server/utils.hpp:534
['json', 'oaicompat_parser_options', 'std::vector '] json []
common_chat_format_single
in /src/llama.cpp/common/chat.cpp:458
['struct common_chat_templates*', 'std::vector ', 'common_chat_msg', 'bool', 'bool'] std::string []
format_input_text
in /src/llama.cpp/examples/diffusion/diffusion-cli.cpp:513
['std::string', 'std::string', 'bool', 'llama_model*'] std::string []
export_md
in /src/llama.cpp/examples/gen-docs/gen-docs.cpp:50
['std::string', 'llama_example'] void []
*operator->()
in /src/llama.cpp/vendor/cpp-httplib/httplib.h:1353
[] Response []
lexer::scan_string
in /src/llama.cpp/vendor/nlohmann/json.hpp:7277
[] token_type []
lexer::scan_comment
in /src/llama.cpp/vendor/nlohmann/json.hpp:7867
[] bool []
lexer::scan_number
in /src/llama.cpp/vendor/nlohmann/json.hpp:7992
[] token_type []
parser::parser
in /src/llama.cpp/vendor/nlohmann/json.hpp:12929
['InputAdapterType'] void []
parser::parse
in /src/llama.cpp/vendor/nlohmann/json.hpp:12951
['bool', 'BasicJsonType'] void []
(3) bool sax_parse(const input_format_t format, json_sax_t* sax_, const bool strict = true, const cbor_tag_handler_t tag_handler = cbor_tag_handler_t::error)
in /src/llama.cpp/vendor/nlohmann/json.hpp:9882
['input_format_t', 'json_sax_t*'] JSON_HEDLEY_NON_NULL []
test_json_healing
in /src/llama.cpp/tests/test-json-partial.cpp:16
[] void []
ggml_backend_rpc_start_server
in /src/llama.cpp/ggml/src/ggml-rpc/ggml-rpc.cpp:1714
['char*', 'char*', 'size_t', 'size_t', 'ggml_backend_dev_t*', 'size_t*', 'size_t*'] void []
common_chat_msg_parser::consume_json
in /src/llama.cpp/common/chat-parser.cpp:392
[] common_json []
test_json_with_dumped_args
in /src/llama.cpp/tests/test-chat-parser.cpp:429
[] void []
common_chat_msg_parser::consume_json_with_dumped_args
in /src/llama.cpp/common/chat-parser.cpp:399
['std::vector >', 'std::vector >'] common_chat_msg_parser::consume_json_result []
parser::accept
in /src/llama.cpp/vendor/nlohmann/json.hpp:13011
[] bool []
from_cbor
in /src/llama.cpp/vendor/nlohmann/json.hpp:24465
['detail::span_input_adapter'] basic_json []
from_msgpack
in /src/llama.cpp/vendor/nlohmann/json.hpp:24520
['detail::span_input_adapter'] basic_json []
from_ubjson
in /src/llama.cpp/vendor/nlohmann/json.hpp:24574
['detail::span_input_adapter'] basic_json []
from_bson
in /src/llama.cpp/vendor/nlohmann/json.hpp:24658
['detail::span_input_adapter'] basic_json []
minja::chat_template::chat_template
in /src/llama.cpp/vendor/minja/chat-template.hpp:109
['std::string', 'std::string', 'std::string'] void []
operator<(const Value & other) const
in /src/llama.cpp/vendor/minja/minja.hpp:341
['Value'] bool []
operator>(const Value & other) const
in /src/llama.cpp/vendor/minja/minja.hpp:350
['Value'] bool []
operator==(const Value & other) const
in /src/llama.cpp/vendor/minja/minja.hpp:359
['Value'] bool []
&at(const Value & index)
in /src/llama.cpp/vendor/minja/minja.hpp:419
['Value'] Value []
operator-() const
in /src/llama.cpp/vendor/minja/minja.hpp:454
[] Value []
operator+(const Value & rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:468
['Value'] Value []
operator-(const Value & rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:482
['Value'] Value []
operator*(const Value & rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:488
['Value'] Value []
operator/(const Value & rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:501
['Value'] Value []
operator%(const Value & rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:507
['Value'] Value []
Value::get() const
in /src/llama.cpp/vendor/minja/minja.hpp:544
[] json []
operator()(const minja::Value & v) const
in /src/llama.cpp/vendor/minja/minja.hpp:578
['minja::Value'] size_t []
minja::SetNode::do_render
in /src/llama.cpp/vendor/minja/minja.hpp:1113
['std::shared_ptr '] void []
minja::SubscriptExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1218
['std::shared_ptr '] Value []
minja::MethodCallExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1470
['std::shared_ptr '] Value []
minja::FilterExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1619
['std::shared_ptr '] Value []
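
Many of the CWE20 sinks above are parsing entry points: the nlohmann::json lexer and parser, minja expression evaluation, and llama_vocab tokenization. The sketch below is a minimal, hypothetical libFuzzer harness in the style of the project's fuzzers/ directory; it is not one of the harnesses listed in this report, and it assumes nlohmann/json is on the include path.

    // Hypothetical harness sketch: feeds untrusted bytes into nlohmann::json
    // parsing, the family of CWE20 sinks listed above (lexer::scan_string,
    // lexer::scan_number, parser::parse).
    #include <cstddef>
    #include <cstdint>
    #include <nlohmann/json.hpp>

    extern "C" int LLVMFuzzerTestOneInput(const uint8_t * data, size_t size) {
        // allow_exceptions=false: malformed input yields a discarded value
        // instead of throwing, so the fuzzer only trips on real defects.
        const auto parsed = nlohmann::json::parse(data, data + size,
                                                  /*cb=*/nullptr,
                                                  /*allow_exceptions=*/false);
        (void) parsed;
        return 0;
    }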

Sink functions/methods found for CWE22 (Path Traversal); an illustrative containment-check sketch follows the blocker table below.

Target sink Reached by fuzzer Function call path Possible branch blockers
write ['/src/llama.cpp/fuzzers/fuzz_structured.cpp', '/src/llama.cpp/fuzzers/fuzz_tokenizer.cpp', '/src/llama.cpp/fuzzers/fuzz_load_model.cpp', '/src/llama.cpp/fuzzers/fuzz_inference.cpp', '/src/llama.cpp/fuzzers/fuzz_structurally_created.cpp'] N/A
Blocker function Arguments type Return type Constants touched
run_merge
in /src/llama.cpp/tools/export-lora/export-lora.cpp:186
[] void []
gguf_merge
in /src/llama.cpp/tools/gguf-split/gguf-split.cpp:398
['split_params'] void []
llama_model_quantize
in /src/llama.cpp/src/llama-quant.cpp:1075
['char*', 'char*', 'llama_model_quantize_params*'] uint32_t []
linenoise
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp:1832
['char*'] char []
linenoiseShow
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp:1323
['struct linenoiseState*'] void []
linenoiseHide
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp:1315
['struct linenoiseState*'] void []
server_sent_event
in /src/llama.cpp/tools/server/utils.hpp:462
['httplib::DataSink', 'json'] bool []
copy_file_to_file
in /src/llama.cpp/tools/gguf-split/gguf-split.cpp:349
['std::ifstream', 'std::ofstream', 'size_t', 'size_t'] void []
test_roundtrip
in /src/llama.cpp/tests/test-gguf.cpp:1073
['ggml_backend_dev_t', 'unsigned int', 'bool'] std::pair []
LlamaData::init
in /src/llama.cpp/tools/run/run.cpp:774
['Opt'] int []
&operator[](size_t index)
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:2857
['size_t'] typename std::enable_if ::type []
ggml_backend_vk_device_init
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12528
['ggml_backend_dev_t', 'char*'] ggml_backend_t []
ggml_backend_vk_reg_get_device_count
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12940
['ggml_backend_reg_t'] size_t []
ggml_backend_vk_reg_get_device
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12945
['ggml_backend_reg_t', 'size_t'] ggml_backend_dev_t []
ggml_backend_vk_device_get_buffer_type
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12496
['ggml_backend_dev_t'] ggml_backend_buffer_type_t []
ggml_backend_vk_set_tensor_async
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11830
['ggml_backend_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
ggml_backend_vk_get_tensor_async
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11853
['ggml_backend_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
ggml_backend_vk_cpy_tensor_async
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11876
['ggml_backend_t', 'ggml_tensor*', 'ggml_tensor*'] bool []
ggml_backend_vk_device_get_host_buffer_type
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12501
['ggml_backend_dev_t'] ggml_backend_buffer_type_t []
ggml_backend_registry
in /src/llama.cpp/ggml/src/ggml-backend-reg.cpp:180
[] void []
ggml_backend_vk_device_supports_op
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12534
['ggml_backend_dev_t', 'ggml_tensor*'] bool []
llama_model::load_tensors
in /src/llama.cpp/src/llama-model.cpp:2100
['llama_model_loader'] bool []
llama_context::decode
in /src/llama.cpp/src/llama-context.cpp:958
['llama_batch'] int []
llama_context::opt_epoch
in /src/llama.cpp/src/llama-context.cpp:2214
['ggml_opt_dataset_t', 'ggml_opt_result_t', 'ggml_opt_result_t', 'int64_t', 'ggml_opt_epoch_callback', 'ggml_opt_epoch_callback'] void []
mtmd_cli_context
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:89
['common_params'] void []
load_model
in /src/llama.cpp/tools/server/server.cpp:2380
['common_params'] bool []
Java_android_llama_cpp_LLamaAndroid_load_1model
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:83
['JNIEnv*', 'jstring'] JNIEXPORT []
llama_model_load_from_splits
in /src/llama.cpp/src/llama.cpp:306
['char**', 'size_t', 'struct llama_model_params'] struct llama_model []
export_gguf
in /src/llama.cpp/tools/cvector-generator/cvector-generator.cpp:353
['std::vector ', 'std::string', 'std::string'] void []
IMatrixCollector::collect_imatrix
in /src/llama.cpp/tools/imatrix/imatrix.cpp:219
['struct ggml_tensor*', 'bool', 'void*'] bool []
llama_model_save_to_file
in /src/llama.cpp/src/llama.cpp:321
['struct llama_model*', 'char*'] void []
gguf_ex_write
in /src/llama.cpp/examples/gguf/gguf.cpp:21
['std::string'] bool []
save_as_llama_model
in /src/llama.cpp/examples/convert-llama2c-to-ggml/convert-llama2c-to-ggml.cpp:635
['struct my_llama_vocab*', 'struct my_llama_model*', 'TransformerWeights*', 'char*'] void []
llama_io_write_i::write_string
in /src/llama.cpp/src/llama-io.cpp:3
['std::string'] void []
llama_memory_recurrent::state_write
in /src/llama.cpp/src/llama-memory-recurrent.cpp:696
['llama_io_write_i', 'llama_seq_id', 'llama_state_seq_flags'] void []
llama_context::state_get_size
in /src/llama.cpp/src/llama-context.cpp:1641
[] size_t []
llama_context::state_get_data
in /src/llama.cpp/src/llama-context.cpp:1651
['uint8_t*', 'size_t'] size_t []
llama_context::state_save_file
in /src/llama.cpp/src/llama-context.cpp:1744
['char*', 'llama_token*', 'size_t'] bool []
llama_kv_cache::state_write
in /src/llama.cpp/src/llama-kv-cache.cpp:1444
['llama_io_write_i', 'llama_seq_id', 'llama_state_seq_flags'] void []
ma_device_data_loop_wakeup__alsa
in /src/llama.cpp/vendor/miniaudio/miniaudio.h:29946
['ma_device*'] ma_result []
ma_device_write__audio4
in /src/llama.cpp/vendor/miniaudio/miniaudio.h:38366
['ma_device*', 'void*', 'ma_uint32', 'ma_uint32*'] ma_result []
ma_device_write__oss
in /src/llama.cpp/vendor/miniaudio/miniaudio.h:38957
['ma_device*', 'void*', 'ma_uint32', 'ma_uint32*'] ma_result []
DataSink::data_sink_streambuf::xsputn
in /src/llama.cpp/vendor/cpp-httplib/httplib.h:655
['char*', 'std::streamsize'] std::streamsize []
Server::process_and_close_socket
in /src/llama.cpp/vendor/cpp-httplib/httplib.h:8467
['socket_t'] bool []
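
The CWE22 sinks above are file write primitives (copy_file_to_file, gguf_merge, llama_context::state_save_file, and similar). What a path traversal analysis looks for is a containment check between an attacker-influenced path and the intended directory before such a sink runs. The following is a minimal sketch under that assumption; path_is_inside is an illustrative helper, not a function from llama.cpp.

    // Hypothetical check: accept a user-supplied relative path only if it
    // resolves to a location inside `base` after normalization, so "../"
    // segments cannot escape the intended directory.
    #include <algorithm>
    #include <filesystem>
    #include <string>

    static bool path_is_inside(const std::filesystem::path & base,
                               const std::string & user_path) {
        namespace fs = std::filesystem;
        const fs::path root     = fs::weakly_canonical(base);
        const fs::path resolved = fs::weakly_canonical(base / user_path);
        // `resolved` is inside `root` iff the elements of `root` form a
        // prefix of the elements of `resolved`.
        auto prefix_end = std::mismatch(root.begin(), root.end(),
                                        resolved.begin(), resolved.end()).first;
        return prefix_end == root.end();
    }

A caller would gate a write sink on this check, for example refusing to open an output stream when path_is_inside(output_dir, requested_name) returns false.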