Fuzz introspector
For issues and ideas: https://github.com/ossf/fuzz-introspector/issues

Project functions overview

The following table shows data about each function in the project. The functions included correspond to all functions present in the fuzzer executables; as such, the table may also contain functions from third-party libraries.

For further technical details on the meaning of the columns in the table below, please see the Glossary.

Columns: Func name | Functions filename | Args | Function call depth | Reached by Fuzzers | Runtime reached by Fuzzers | Combined reached by Fuzzers | Fuzzers runtime hit | Func lines hit % | I Count | BB Count | Cyclomatic complexity | Functions reached | Reached by functions | Accumulated cyclomatic complexity | Undiscovered complexity

Fuzzer details

Fuzzer: fuzzers/fuzz_json_to_grammar.cpp

Call tree

The calltree shows the control flow of the fuzzer. It is overlaid with coverage information to show how much of the code that the fuzzer can potentially reach is actually covered at runtime. Below are a link to a detailed calltree visualisation and a bitmap giving a high-level view of the calltree. For further information about these topics, please see the glossary entries for full calltree and calltree overview.

Call tree overview bitmap:

The project has no code coverage data. Blockers are not displayed, as they depend on code coverage.

Fuzzer: fuzzers/fuzz_apply_template.cpp

Call tree

Call tree overview bitmap:

The project has no code coverage data. Blockers are not displayed, as they depend on code coverage.

Fuzzer: fuzzers/fuzz_structurally_created.cpp

Call tree

Call tree overview bitmap:

The project has no code coverage data. Blockers are not displayed, as they depend on code coverage.

Fuzzer: fuzzers/fuzz_structured.cpp

Call tree

Call tree overview bitmap:

The project has no code coverage data. Blockers are not displayed, as they depend on code coverage.

Fuzzer: fuzzers/fuzz_tokenizer.cpp

Call tree

Call tree overview bitmap:

The project has no code coverage data. Blockers are not displayed, as they depend on code coverage.

Fuzzer: fuzzers/fuzz_inference.cpp

Call tree

Call tree overview bitmap:

The project has no code coverage data. Blockers are not displayed, as they depend on code coverage.

Fuzzer: fuzzers/fuzz_grammar.cpp

Call tree

Call tree overview bitmap:

The project has no code coverage data. Blockers are not displayed, as they depend on code coverage.

Fuzzer: fuzzers/fuzz_load_model.cpp

Call tree

Call tree overview bitmap:

The project has no code coverage data. Blockers are not displayed, as they depend on code coverage.

Files and Directories in report

This section shows which files and directories are considered in this report. It is included because Fuzz Introspector may take more code into account than is desired, and it helps identify whether too many files or directories are included, e.g. third-party code that is irrelevant to the threat model. If too much is included, Fuzz Introspector supports a configuration file that can exclude data from the report. See the following link for more information on how to create a config file: link
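As an illustration only (the file name and any directive syntax are defined by the Fuzz Introspector documentation linked above; the fragment below is hypothetical), such a config would typically list path patterns to drop from the analysis, e.g. the vendored, test and example code that appears later in this report:

```
# Hypothetical exclusion config -- consult the Fuzz Introspector docs
# linked above for the actual file name and directive syntax.
# Intent: exclude vendored, test and example code from the report.
vendor/
tests/
examples/
```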

Files in report

Source file Reached by Covered by
/src/llama.cpp/ggml/src/ggml-cpu/arch/x86/quants.c [] []
/src/llama.cpp/src/llama-context.cpp ['fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_tokenizer.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/tests/test-quantize-stats.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/dequantize.hpp [] []
/src/llama.cpp/tools/cvector-generator/mean.hpp [] []
/src/llama.cpp/tools/quantize/quantize.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/tools/cvector-generator/pca.hpp [] []
/src/llama.cpp/tests/get-model.cpp [] []
/src/llama.cpp/src/llama-kv-cells.h ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/ggml/src/ggml-sycl/common.hpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/ggml/src/ggml-cpu/simd-mappings.h [] []
/src/llama.cpp/examples/gguf/gguf.cpp [] []
/src/llama.cpp/src/llama-model.cpp ['fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/src/llama-vocab.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/common/regex-partial.h [] []
/src/llama.cpp/tests/test-quantize-perf.cpp [] []
/src/llama.cpp/common/chat.h [] []
/src/llama.cpp/ggml/src/ggml-sycl/dmmv.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/tools/tokenize/tokenize.cpp [] []
/src/llama.cpp/ggml/src/ggml-opt.cpp [] []
/src/llama.cpp/common/sampling.cpp [] []
/src/llama.cpp/common/llguidance.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/ops.cpp [] []
/src/llama.cpp/src/llama-graph.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/quants.c [] []
/src/llama.cpp/src/llama-impl.h [] []
/src/llama.cpp/fuzzers/fuzz_inference.cpp ['fuzzers/fuzz_inference.cpp'] []
/src/llama.cpp/src/llama-memory-recurrent.cpp [] []
/src/llama.cpp/common/common.h [] []
/src/llama.cpp/common/json-schema-to-grammar.cpp ['fuzzers/fuzz_json_to_grammar.cpp'] []
/src/llama.cpp/ggml/src/ggml-cpu/binary-ops.cpp [] []
/src/llama.cpp/common/chat-parser.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/softmax.cpp [] []
/src/llama.cpp/ggml/src/ggml-cann/acl_tensor.h [] []
/src/llama.cpp/ggml/src/ggml-sycl/mmvq.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/gla.cpp [] []
/src/llama.cpp/src/llama-model-loader.cpp ['fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/tools/mtmd/mtmd.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/ggml/src/ggml-threading.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/examples/gen-docs/gen-docs.cpp [] []
/src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp [] []
/src/llama.cpp/pocs/vdot/vdot.cpp [] []
/src/llama.cpp/common/ngram-cache.cpp [] []
/src/llama.cpp/ggml/src/ggml-impl.h ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/fuzzers/fuzz_grammar.cpp ['fuzzers/fuzz_grammar.cpp'] []
/src/llama.cpp/ggml/src/ggml-sycl/element_wise.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/common.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/ggml/src/ggml-cpu/arch/loongarch/quants.c [] []
/src/llama.cpp/ggml/src/ggml-cpu/kleidiai/kernels.h [] []
/src/llama.cpp/examples/gguf-hash/gguf-hash.cpp [] []
/src/llama.cpp/vendor/nlohmann/json_fwd.hpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/binbcast.cpp [] []
/src/llama.cpp/ggml/src/ggml-cann/acl_tensor.cpp [] []
/src/llama.cpp/ggml/src/ggml-cuda/vendors/hip.h [] []
/src/llama.cpp/examples/convert-llama2c-to-ggml/convert-llama2c-to-ggml.cpp ['fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/tools/llama-bench/llama-bench.cpp [] []
/src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp [] []
/src/llama.cpp/src/llama-mmap.cpp ['fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/src/llama-kv-cache-unified-iswa.cpp [] []
/src/llama.cpp/src/llama-memory-hybrid.cpp [] []
/src/llama.cpp/ggml/src/ggml.c ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_tokenizer.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/tests/test-chat-parser.cpp [] []
/src/llama.cpp/examples/gguf-hash/deps/sha1/sha1.c [] []
/src/llama.cpp/ggml/src/ggml-cpu/amx/mmq.cpp [] []
/src/llama.cpp/fuzzers/fuzz_structurally_created.cpp ['fuzzers/fuzz_structurally_created.cpp'] []
/src/llama.cpp/ggml/src/ggml-cpu/repack.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/cpy.cpp [] []
/src/llama.cpp/ggml/src/gguf.cpp ['fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/examples/eval-callback/eval-callback.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/llamafile/sgemm.cpp [] []
/src/llama.cpp/common/arg.cpp [] []
/src/llama.cpp/ggml/src/ggml-backend.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/fuzzers/fuzz_tokenizer.cpp ['fuzzers/fuzz_tokenizer.cpp'] []
/src/llama.cpp/common/common.cpp ['fuzzers/fuzz_tokenizer.cpp', 'fuzzers/fuzz_inference.cpp'] []
/src/llama.cpp/ggml/src/ggml-sycl/tsembd.cpp [] []
/src/llama.cpp/fuzzers/fuzz_load_model.cpp ['fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/src/llama-io.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/wkv.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/mmq.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/common.h [] []
/src/llama.cpp/ggml/src/ggml-rpc/ggml-rpc.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/common/chat.cpp [] []
/src/llama.cpp/ggml/src/ggml-cann/aclnn_ops.h [] []
/src/llama.cpp/tests/test-quantize-fns.cpp [] []
/src/llama.cpp/tests/test-gbnf-validator.cpp [] []
/src/llama.cpp/src/llama-kv-cache-unified.cpp [] []
/src/llama.cpp/src/llama-quant.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/quants.hpp [] []
/src/llama.cpp/src/llama-adapter.cpp [] []
/src/llama.cpp/src/llama-graph.h [] []
/src/llama.cpp/src/llama-io.h [] []
/src/llama.cpp/ggml/src/ggml-cpu/repack.h [] []
/src/llama.cpp/examples/gguf-hash/deps/sha256/sha256.c [] []
/src/llama.cpp/examples/parallel/parallel.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/arch/arm/cpu-feats.cpp [] []
/src/llama.cpp/tests/test-gguf.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/examples/lookahead/lookahead.cpp [] []
/src/llama.cpp/src/llama-adapter.h [] []
/src/llama.cpp/common/speculative.cpp [] []
/src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp [] []
/src/llama.cpp/src/llama-memory-recurrent.h [] []
/src/llama.cpp/ggml/src/ggml-alloc.c ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/tools/cvector-generator/cvector-generator.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/src/llama-arch.h [] []
/src/llama.cpp/src/llama-grammar.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/tests/test-rope.cpp [] []
/src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/hbm.cpp [] []
/src/llama.cpp/tests/test-sampling.cpp ['fuzzers/fuzz_json_to_grammar.cpp'] []
/src/llama.cpp/tests/test-regex-partial.cpp [] []
/src/llama.cpp/tools/main/main.cpp [] []
/src/llama.cpp/ggml/include/ggml-cpp.h [] []
/src/llama.cpp/src/llama-kv-cache-unified.h [] []
/src/llama.cpp/examples/retrieval/retrieval.cpp [] []
/src/llama.cpp/common/arg.h [] []
/src/llama.cpp/vendor/cpp-httplib/httplib.h [] []
/src/llama.cpp/tools/server/utils.hpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/fuzzers/fuzz_apply_template.cpp ['fuzzers/fuzz_apply_template.cpp'] []
/src/llama.cpp/tools/mtmd/mtmd-cli.cpp [] []
/src/llama.cpp/vendor/stb/stb_image.h [] []
/src/llama.cpp/ggml/src/ggml-sycl/element_wise.hpp [] []
/src/llama.cpp/common/log.h [] []
/src/llama.cpp/common/json-partial.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/common/console.cpp [] []
/src/llama.cpp/tests/test-backend-ops.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/tools/mtmd/mtmd-audio.cpp [] []
/src/llama.cpp/src/llama-memory.h [] []
/src/llama.cpp/ggml/src/ggml-sycl/getrows.cpp [] []
/src/llama.cpp/tests/test-grammar-parser.cpp [] []
/src/llama.cpp/fuzzers/fuzz_json_to_grammar.cpp ['fuzzers/fuzz_json_to_grammar.cpp'] []
/src/llama.cpp/src/llama-kv-cache-unified-iswa.h [] []
/src/llama.cpp/ggml/src/ggml-backend-impl.h [] []
/src/llama.cpp/ggml/src/ggml-cpu/amx/common.h [] []
/src/llama.cpp/tools/export-lora/export-lora.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/examples/deprecation-warning/deprecation-warning.cpp [] []
/src/llama.cpp/tools/rpc/rpc-server.cpp [] []
/src/llama.cpp/tests/test-tokenizer-0.cpp [] []
/src/llama.cpp/tools/perplexity/perplexity.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/src/llama-model-saver.cpp [] []
/src/llama.cpp/vendor/minja/chat-template.hpp [] []
/src/llama.cpp/src/llama-model-loader.h [] []
/src/llama.cpp/ggml/src/ggml-sycl/gemm.hpp [] []
/src/llama.cpp/ggml/src/ggml-cann/aclnn_ops.cpp [] []
/src/llama.cpp/tools/mtmd/clip-impl.h ['fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/ggml/src/ggml-common.h [] []
/src/llama.cpp/src/llama-arch.cpp [] []
/src/llama.cpp/tools/tts/tts.cpp [] []
/src/llama.cpp/tests/test-grammar-llguidance.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/outprod.cpp [] []
/src/llama.cpp/ggml/src/ggml.cpp [] []
/src/llama.cpp/src/llama-impl.cpp ['fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/src/llama-memory.cpp [] []
/src/llama.cpp/ggml/src/ggml-cann/common.h [] []
/src/llama.cpp/ggml/src/ggml-cpu/arch/powerpc/cpu-feats.cpp [] []
/src/llama.cpp/src/unicode.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/convert.cpp [] []
/src/llama.cpp/examples/embedding/embedding.cpp [] []
/src/llama.cpp/tests/test-json-partial.cpp [] []
/src/llama.cpp/tools/run/linenoise.cpp/linenoise.h [] []
/src/llama.cpp/ggml/src/ggml-sycl/rope.cpp [] []
/src/llama.cpp/tests/test-grammar-integration.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/ggml/src/ggml-cpu/arch/x86/repack.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/ggml/src/ggml-quants.c ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/ggml/src/ggml-cpu/amx/amx.cpp [] []
/src/llama.cpp/include/llama.h [] []
/src/llama.cpp/common/regex-partial.cpp [] []
/src/llama.cpp/tests/test-double-float.cpp [] []
/src/llama.cpp/src/llama-memory-hybrid.h [] []
/src/llama.cpp/ggml/src/ggml-sycl/im2col.cpp [] []
/src/llama.cpp/src/llama-batch.cpp ['fuzzers/fuzz_inference.cpp'] []
/src/llama.cpp/fuzzers/fuzz_structured.cpp ['fuzzers/fuzz_structured.cpp'] []
/src/llama.cpp/ggml/src/ggml-cpu/vec.h ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/tests/test-chat-template.cpp [] []
/src/llama.cpp/tools/mtmd/clip.cpp ['fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/tools/mtmd/mtmd-helper.cpp [] []
/src/llama.cpp/src/unicode.h [] []
/src/llama.cpp/vendor/minja/minja.hpp [] []
/src/llama.cpp/src/llama-sampling.cpp [] []
/src/llama.cpp/tests/test-json-schema-to-grammar.cpp ['fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/vendor/miniaudio/miniaudio.h ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/tools/mtmd/mtmd.h [] []
/src/llama.cpp/ggml/src/ggml-cpu/kleidiai/kernels.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/concat.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/conv.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/arch/arm/quants.c [] []
/src/llama.cpp/src/llama-mmap.h [] []
/src/llama.cpp/common/log.cpp [] []
/src/llama.cpp/tests/test-opt.cpp [] []
/src/llama.cpp/vendor/nlohmann/json.hpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_tokenizer.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_grammar.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/ggml/src/ggml-sycl/binbcast.hpp [] []
/src/llama.cpp/pocs/vdot/q8dot.cpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/vecdotq.hpp [] []
/src/llama.cpp/examples/gguf-hash/deps/xxhash/xxhash.h [] []
/src/llama.cpp/tools/gguf-split/gguf-split.cpp ['fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/ggml/src/ggml-cpu/arch/arm/repack.cpp [] []
/src/llama.cpp/common/ngram-cache.h [] []
/src/llama.cpp/include/llama-cpp.h [] []
/src/llama.cpp/common/chat-parser.h [] []
/src/llama.cpp/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp [] []
/src/llama.cpp/tests/test-chat.cpp [] []
/src/llama.cpp/ggml/src/ggml-cann/ggml-cann.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/vec.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/kleidiai/kleidiai.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/traits.cpp [] []
/src/llama.cpp/ggml/src/ggml-blas/ggml-blas.cpp [] []
/src/llama.cpp/ggml/src/ggml-kompute/ggml-kompute.cpp [] []
/src/llama.cpp/common/base64.hpp [] []
/src/llama.cpp/ggml/src/ggml-sycl/norm.cpp [] []
/src/llama.cpp/examples/gritlm/gritlm.cpp [] []
/src/llama.cpp/src/llama-chat.cpp ['fuzzers/fuzz_apply_template.cpp'] []
/src/llama.cpp/tools/run/run.cpp ['fuzzers/fuzz_json_to_grammar.cpp'] []
/src/llama.cpp/ggml/src/ggml-cpu/unary-ops.cpp [] []
/src/llama.cpp/tools/imatrix/imatrix.cpp [] []
/src/llama.cpp/ggml/src/ggml-backend-reg.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/src/llama-cparams.cpp [] []
/src/llama.cpp/ggml/include/ggml.h [] []
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-impl.h [] []
/src/llama.cpp/examples/gguf-hash/deps/rotate-bits/rotate-bits.h [] []
/src/llama.cpp/tools/server/server.cpp ['fuzzers/fuzz_json_to_grammar.cpp', 'fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_tokenizer.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []
/src/llama.cpp/src/llama-hparams.cpp [] []
/src/llama.cpp/src/llama.cpp ['fuzzers/fuzz_apply_template.cpp', 'fuzzers/fuzz_structurally_created.cpp', 'fuzzers/fuzz_structured.cpp', 'fuzzers/fuzz_inference.cpp', 'fuzzers/fuzz_load_model.cpp'] []

Directories in report

Directory
/src/llama.cpp/vendor/stb/
/src/llama.cpp/src/
/src/llama.cpp/ggml/src/ggml-cpu/amx/
/src/llama.cpp/ggml/src/ggml-cpu/kleidiai/
/src/llama.cpp/ggml/src/ggml-opencl/
/src/llama.cpp/include/
/src/llama.cpp/ggml/include/
/src/llama.cpp/examples/eval-callback/
/src/llama.cpp/ggml/src/ggml-kompute/
/src/llama.cpp/ggml/src/ggml-sycl/
/src/llama.cpp/ggml/src/
/src/llama.cpp/tools/tts/
/src/llama.cpp/ggml/src/ggml-vulkan/vulkan-shaders/
/src/llama.cpp/examples/gguf-hash/deps/rotate-bits/
/src//src/
/src/llama.cpp/examples/parallel/
/src/llama.cpp/examples/gguf-hash/
/src/llama.cpp/ggml/src/ggml-cpu/
/src/llama.cpp/vendor/cpp-httplib/
/src/llama.cpp/ggml/src/ggml-cpu/arch/powerpc/
/src/llama.cpp/vendor/nlohmann/
/src/llama.cpp/tools/cvector-generator/
/src/llama.cpp/examples/gguf-hash/deps/sha256/
/src/llama.cpp/tools/main/
/src/llama.cpp/examples/deprecation-warning/
/src/llama.cpp/tools/run/linenoise.cpp/
/src/llama.cpp/ggml/src/ggml-cpu/arch/x86/
/src/llama.cpp/tools/export-lora/
/src/llama.cpp/ggml/src/ggml-rpc/
/src/llama.cpp/examples/gguf-hash/deps/xxhash/
/src/llama.cpp/ggml/src/ggml-cpu/llamafile/
/src/llama.cpp/examples/gritlm/
/src/llama.cpp/tools/mtmd/
/src/llama.cpp/examples/embedding/
/src/llama.cpp/tools/quantize/
/src/llama.cpp/tools/gguf-split/
/src/llama.cpp/ggml/src/ggml-cuda/vendors/
/src/llama.cpp/vendor/minja/
/src/llama.cpp/pocs/vdot/
/src/llama.cpp/examples/gguf/
/src/llama.cpp/tools/tokenize/
/src/llama.cpp/fuzzers/
/src/llama.cpp/common/
/src/llama.cpp/ggml/src/ggml-cann/
/src/llama.cpp/ggml/src/ggml-sycl/dpct/
/src/llama.cpp/examples/convert-llama2c-to-ggml/
/src/llama.cpp/examples/llama.android/llama/src/main/cpp/
/src/llama.cpp/tools/llama-bench/
/src/llama.cpp/examples/gguf-hash/deps/sha1/
/src/llama.cpp/examples/lookahead/
/src/llama.cpp/ggml/src/ggml-blas/
/src/llama.cpp/ggml/src/ggml-vulkan/
/src/llama.cpp/tools/perplexity/
/src/llama.cpp/examples/gen-docs/
/src/llama.cpp/vendor/miniaudio/
/src/llama.cpp/ggml/src/ggml-cpu/arch/loongarch/
/src/llama.cpp/tests/
/src/llama.cpp/tools/run//
/src/llama.cpp/ggml/src/ggml-cpu/arch/arm/
/src/llama.cpp/examples/retrieval/
/src/llama.cpp/tools/rpc/
/src/llama.cpp/tools/run/
/src/llama.cpp/tools/imatrix/
/src/llama.cpp/tools/server/

Sink analyser for CWEs

This section contains one table per CWE supported by the sink analyser. Each table lists the sink functions/methods found in the project for that CWE, together with which fuzzers statically reach them and, where no fuzzer does, a possible call path to the sink. Column 1 gives the name of the sink function/method. Column 2 lists the fuzzers (possibly none) that statically cover it. Column 3 shows a possible call path to the function/method when no fuzzer covers it. Lastly, column 4 shows possible blockers that prevent an existing fuzzer from reaching the sink dynamically.

Sink functions/methods found for CWE787

Target sink Reached by fuzzer Function call path Possible branch blockers
realloc ['/src/llama.cpp/fuzzers/fuzz_structurally_created.cpp', '/src/llama.cpp/fuzzers/fuzz_structured.cpp', '/src/llama.cpp/fuzzers/fuzz_tokenizer.cpp', '/src/llama.cpp/fuzzers/fuzz_inference.cpp', '/src/llama.cpp/fuzzers/fuzz_load_model.cpp'] N/A
Blocker function Arguments type Return type Constants touched
linenoiseAddCompletion
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp:1020
['linenoiseCompletions*', 'char*'] void []
linenoise
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp:1832
['char*'] char []
alloc_compute_meta
in /src/llama.cpp/tools/mtmd/clip.cpp:2595
['clip_ctx'] void []
LlamaData::init
in /src/llama.cpp/tools/run/run.cpp:637
['Opt'] int []
&operator[](size_t index)
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:2867
['size_t'] typename std::enable_if ::type []
llama_model::load_tensors
in /src/llama.cpp/src/llama-model.cpp:1536
['llama_model_loader'] bool []
llama_context::decode
in /src/llama.cpp/src/llama-context.cpp:880
['llama_batch'] int []
llama_context::opt_epoch
in /src/llama.cpp/src/llama-context.cpp:2107
['ggml_opt_dataset_t', 'ggml_opt_result_t', 'ggml_opt_result_t', 'int64_t', 'ggml_opt_epoch_callback', 'ggml_opt_epoch_callback'] void []
Java_android_llama_cpp_LLamaAndroid_new_1context
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:109
['JNIEnv*', 'jlong'] JNIEXPORT []
mtmd_cli_context
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:89
['common_params'] void []
load_model
in /src/llama.cpp/tools/server/server.cpp:1940
['common_params'] bool []
clip_encode_float_image
in /src/llama.cpp/tools/mtmd/clip.cpp:4130
['struct clip_ctx*', 'int', 'float*', 'int', 'int', 'float*'] bool []
eval_message
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:199
['mtmd_cli_context', 'common_chat_msg'] int []
process_chunk
in /src/llama.cpp/tools/server/utils.hpp:1309
['llama_context*', 'mtmd_context*', 'llama_pos', 'int32_t', 'llama_pos'] int32_t []
test_backend
in /src/llama.cpp/tests/test-opt.cpp:790
['ggml_backend_sched_t', 'ggml_backend_t'] std::pair []
llama_kv_cache_unified::update
in /src/llama.cpp/src/llama-kv-cache-unified.cpp:451
['llama_context*', 'bool', 'defrag_info'] bool []
Java_android_llama_cpp_LLamaAndroid_load_1model
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:83
['JNIEnv*', 'jstring'] JNIEXPORT []
llama_model_load_from_splits
in /src/llama.cpp/src/llama.cpp:251
['char**', 'size_t', 'struct llama_model_params'] struct llama_model []
common_opt_dataset_init
in /src/llama.cpp/common/common.cpp:1535
['struct llama_context*', 'std::vector ', 'int64_t'] ggml_opt_dataset_t []
llama_context::opt_init
in /src/llama.cpp/src/llama-context.cpp:1963
['struct llama_model*', 'struct llama_opt_params'] void []
minja::BinaryOpExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1304
['std::shared_ptr '] Value []
ggml_backend_sycl_graph_compute
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:4008
['ggml_backend_t', 'ggml_cgraph*'] ggml_status []
eval_grad
in /src/llama.cpp/tests/test-backend-ops.cpp:751
['ggml_backend_t', 'char*'] bool []
test_roundtrip
in /src/llama.cpp/tests/test-gguf.cpp:1073
['ggml_backend_dev_t', 'unsigned int', 'bool'] std::pair []
test_gguf_set_kv
in /src/llama.cpp/tests/test-gguf.cpp:1203
['ggml_backend_dev_t', 'unsigned int'] std::pair []
run_merge
in /src/llama.cpp/tools/export-lora/export-lora.cpp:186
[] void []
PCA::pca_model
in /src/llama.cpp/tools/cvector-generator/pca.hpp:63
['struct ggml_tensor*'] void []
llama_model::create_memory
in /src/llama.cpp/src/llama-model.cpp:14436
['llama_memory_params', 'llama_cparams'] llama_memory_i []
llama_adapter_cvec::apply
in /src/llama.cpp/src/llama-adapter.cpp:93
['llama_model', 'float*', 'size_t', 'int32_t', 'int32_t', 'int32_t'] bool []

Sink functions/methods found for CWE-416 (Use After Free)

Target sink Reached by fuzzer Function call path Possible branch blockers
get ['/src/llama.cpp/fuzzers/fuzz_structurally_created.cpp', '/src/llama.cpp/fuzzers/fuzz_structured.cpp', '/src/llama.cpp/fuzzers/fuzz_load_model.cpp', '/src/llama.cpp/fuzzers/fuzz_tokenizer.cpp', '/src/llama.cpp/fuzzers/fuzz_inference.cpp', '/src/llama.cpp/fuzzers/fuzz_json_to_grammar.cpp'] N/A
Blocker function Arguments type Return type Constants touched
minja::BinaryOpExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1304
['std::shared_ptr '] Value []
ggml_backend_sycl_graph_compute
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:4008
['ggml_backend_t', 'ggml_cgraph*'] ggml_status []
eval_grad
in /src/llama.cpp/tests/test-backend-ops.cpp:751
['ggml_backend_t', 'char*'] bool []
mtmd_cli_context
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:89
['common_params'] void []
load_media
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:153
['std::string'] bool []
clip_graph
in /src/llama.cpp/tools/mtmd/clip.cpp:444
['clip_ctx*', 'clip_image_f32'] void []
clip_model_loader
in /src/llama.cpp/tools/mtmd/clip.cpp:2004
['char*'] void []
LlamaData::init
in /src/llama.cpp/tools/run/run.cpp:637
['Opt'] int []
&operator[](size_t index)
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:2867
['size_t'] typename std::enable_if ::type []
llama_model::load_tensors
in /src/llama.cpp/src/llama-model.cpp:1536
['llama_model_loader'] bool []
llama_context::decode
in /src/llama.cpp/src/llama-context.cpp:880
['llama_batch'] int []
llama_context::opt_epoch
in /src/llama.cpp/src/llama-context.cpp:2107
['ggml_opt_dataset_t', 'ggml_opt_result_t', 'ggml_opt_result_t', 'int64_t', 'ggml_opt_epoch_callback', 'ggml_opt_epoch_callback'] void []
load_model
in /src/llama.cpp/tools/server/server.cpp:1940
['common_params'] bool []
Java_android_llama_cpp_LLamaAndroid_load_1model
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:83
['JNIEnv*', 'jstring'] JNIEXPORT []
llama_model_load_from_splits
in /src/llama.cpp/src/llama.cpp:251
['char**', 'size_t', 'struct llama_model_params'] struct llama_model []
clip_image_f32_get_img
in /src/llama.cpp/tools/mtmd/clip.cpp:2793
['struct clip_image_f32_batch*', 'int'] clip_image_f32 []
clip_encode_float_image
in /src/llama.cpp/tools/mtmd/clip.cpp:4130
['struct clip_ctx*', 'int', 'float*', 'int', 'int', 'float*'] bool []
eval_message
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:199
['mtmd_cli_context', 'common_chat_msg'] int []
process_chunk
in /src/llama.cpp/tools/server/utils.hpp:1309
['llama_context*', 'mtmd_context*', 'llama_pos', 'int32_t', 'llama_pos'] int32_t []
mtmd::nx
in /src/llama.cpp/tools/mtmd/mtmd.h:259
[] uint32_t []
mtmd::ny
in /src/llama.cpp/tools/mtmd/mtmd.h:260
[] uint32_t []
mtmd::data
in /src/llama.cpp/tools/mtmd/mtmd.h:261
[] unsigned char []
mtmd::n_bytes
in /src/llama.cpp/tools/mtmd/mtmd.h:262
[] size_t []
mtmd::id
in /src/llama.cpp/tools/mtmd/mtmd.h:263
[] std::string []
mtmd::set_id
in /src/llama.cpp/tools/mtmd/mtmd.h:264
['char*'] void []
mtmd::c_ptr
in /src/llama.cpp/tools/mtmd/mtmd.h:274
[] std::vector []
*operator[](size_t idx)
in /src/llama.cpp/tools/mtmd/mtmd.h:289
['size_t'] mtmd_input_chunk []
add_media
in /src/llama.cpp/tools/mtmd/mtmd.cpp:470
['mtmd_bitmap*'] int32_t []
check_context_size
in /src/llama.cpp/tools/run/run.cpp:972
['llama_context_ptr', 'llama_batch'] int []
chat_loop
in /src/llama.cpp/tools/run/run.cpp:1195
['LlamaData', 'Opt'] int []
get_tts_version
in /src/llama.cpp/tools/tts/tts.cpp:477
['llama_model*'] outetts_version []
audio_text_from_speaker
in /src/llama.cpp/tools/tts/tts.cpp:499
['json'] std::string []
audio_data_from_speaker
in /src/llama.cpp/tools/tts/tts.cpp:512
['json'] std::string []
params_from_json_cmpl
in /src/llama.cpp/tools/server/server.cpp:243
['llama_context*', 'common_params', 'json'] slot_params []
update_slots
in /src/llama.cpp/tools/server/server.cpp:2961
[] void []
tokenize_input_prompts
in /src/llama.cpp/tools/server/utils.hpp:199
['llama_vocab*', 'json', 'bool', 'bool'] std::vector []
get_common_prefix
in /src/llama.cpp/tools/server/utils.hpp:1254
['server_tokens'] size_t []
validate
in /src/llama.cpp/tools/server/utils.hpp:1286
['struct llama_context*'] bool []
ggml_sycl_op_mul_mat_sycl
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:2062
['ggml_backend_sycl_context', 'ggml_tensor*', 'ggml_tensor*', 'ggml_tensor*', 'char*', 'float*', 'char*', 'float*', 'int64_t', 'int64_t', 'int64_t', 'int64_t', 'queue_ptr'] void []
dpct::detail::gemm_impl
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:1748
['sycl::queue', 'oneapi::math::transpose', 'oneapi::math::transpose', 'int', 'int', 'int', 'void*', 'void*', 'int', 'void*', 'int', 'void*', 'void*', 'int'] void []
operator[](size_t index) const
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:2755
['size_t'] pointer_t []
ggml_backend_blas_graph_compute
in /src/llama.cpp/ggml/src/ggml-blas/ggml-blas.cpp:227
['ggml_backend_t', 'struct ggml_cgraph*'] enum ggml_status []
llama_model::create_memory
in /src/llama.cpp/src/llama-model.cpp:14436
['llama_memory_params', 'llama_cparams'] llama_memory_i []
llama_memory_recurrent::state_read
in /src/llama.cpp/src/llama-memory-recurrent.cpp:719
['llama_io_read_i', 'llama_seq_id'] void []
llama_memory_recurrent::total_size
in /src/llama.cpp/src/llama-memory-recurrent.cpp:648
[] size_t []
llama_model_quantize
in /src/llama.cpp/src/llama-quant.cpp:1037
['char*', 'char*', 'llama_model_quantize_params*'] uint32_t []
Java_android_llama_cpp_LLamaAndroid_new_1context
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:109
['JNIEnv*', 'jlong'] JNIEXPORT []
llama_context::get_sched
in /src/llama.cpp/src/llama-context.cpp:387
[] ggml_backend_sched_t []
llama_context::get_ctx_compute
in /src/llama.cpp/src/llama-context.cpp:391
[] ggml_context []
llama_context::get_memory
in /src/llama.cpp/src/llama-context.cpp:423
[] llama_memory_t []
llama_context::set_abort_callback
in /src/llama.cpp/src/llama-context.cpp:610
['void*'] void []
llama_context::state_set_data
in /src/llama.cpp/src/llama-context.cpp:1576
['uint8_t*', 'size_t'] size_t []
llama_context::state_load_file
in /src/llama.cpp/src/llama-context.cpp:1616
['char*', 'llama_token*', 'size_t', 'size_t*'] bool []
llama_context::opt_init
in /src/llama.cpp/src/llama-context.cpp:1963
['struct llama_model*', 'struct llama_opt_params'] void []
llama_kv_cache_unified::state_read
in /src/llama.cpp/src/llama-kv-cache-unified.cpp:1391
['llama_io_read_i', 'llama_seq_id'] void []
llama_kv_cache_unified::total_size
in /src/llama.cpp/src/llama-kv-cache-unified.cpp:917
[] size_t []
llama_kv_cache_unified_iswa::get_base
in /src/llama.cpp/src/llama-kv-cache-unified-iswa.cpp:191
[] llama_kv_cache_unified []
llama_kv_cache_unified_iswa::get_swa
in /src/llama.cpp/src/llama-kv-cache-unified-iswa.cpp:195
[] llama_kv_cache_unified []
llama_kv_cache_unified_iswa_context::get_base
in /src/llama.cpp/src/llama-kv-cache-unified-iswa.cpp:269
[] llama_kv_cache_unified_context []
llama_kv_cache_unified_iswa_context::get_swa
in /src/llama.cpp/src/llama-kv-cache-unified-iswa.cpp:275
[] llama_kv_cache_unified_context []
llama_model_loader::llama_model_loader
in /src/llama.cpp/src/llama-model-loader.cpp:468
['std::string', 'std::vector ', 'bool', 'bool', 'llama_model_kv_override*', 'llama_model_tensor_buft_override*'] void []
llama_model_loader::load_all_data
in /src/llama.cpp/src/llama-model-loader.cpp:918
['struct ggml_context*', 'llama_buf_map', 'llama_mlocks*', 'llama_progress_callback', 'void*'] bool []
llama_vocab::impl::tokenize(const std::string & raw_text, bool add_special, bool parse_special) const
in /src/llama.cpp/src/llama-vocab.cpp:2400
['std::string', 'bool', 'bool'] std::vector []
llama_memory_hybrid::get_mem_attn
in /src/llama.cpp/src/llama-memory-hybrid.cpp:171
[] llama_kv_cache_unified []
llama_memory_hybrid::get_mem_recr
in /src/llama.cpp/src/llama-memory-hybrid.cpp:175
[] llama_memory_recurrent []
llama_memory_hybrid_context::get_attn
in /src/llama.cpp/src/llama-memory-hybrid.cpp:240
[] llama_kv_cache_unified_context []
llama_memory_hybrid_context::get_recr
in /src/llama.cpp/src/llama-memory-hybrid.cpp:244
[] llama_memory_recurrent_context []
test_failure_left_recursion
in /src/llama.cpp/tests/test-grammar-integration.cpp:702
[] void []
test_template_output_parsers
in /src/llama.cpp/tests/test-chat.cpp:552
[] void []
oaicompat_chat_params_parse
in /src/llama.cpp/tools/server/utils.hpp:590
['json', 'oaicompat_parser_options', 'std::vector '] json []
common_params_parse
in /src/llama.cpp/common/arg.cpp:1179
['int', 'char**', 'common_params', 'llama_example'] bool []
common_chat_format_single
in /src/llama.cpp/common/chat.cpp:439
['struct common_chat_templates*', 'std::vector ', 'common_chat_msg', 'bool', 'bool'] std::string []
export_md
in /src/llama.cpp/examples/gen-docs/gen-docs.cpp:50
['std::string', 'llama_example'] void []
*operator->()
in /src/llama.cpp/vendor/cpp-httplib/httplib.h:1208
[] Response []
lexer::scan_string
in /src/llama.cpp/vendor/nlohmann/json.hpp:7277
[] token_type []
lexer::scan_comment
in /src/llama.cpp/vendor/nlohmann/json.hpp:7867
[] bool []
lexer::scan_number
in /src/llama.cpp/vendor/nlohmann/json.hpp:7992
[] token_type []
parser::parser
in /src/llama.cpp/vendor/nlohmann/json.hpp:12929
['InputAdapterType'] void []
parser::parse
in /src/llama.cpp/vendor/nlohmann/json.hpp:12951
['bool', 'BasicJsonType'] void []
(3) bool sax_parse(const input_format_t format, json_sax_t* sax_, const bool strict = true, const cbor_tag_handler_t tag_handler = cbor_tag_handler_t::error)
in /src/llama.cpp/vendor/nlohmann/json.hpp:9882
['input_format_t', 'json_sax_t*'] JSON_HEDLEY_NON_NULL []
test_json_healing
in /src/llama.cpp/tests/test-json-partial.cpp:16
[] void []
ggml_backend_rpc_start_server
in /src/llama.cpp/ggml/src/ggml-rpc/ggml-rpc.cpp:1595
['ggml_backend_t', 'char*', 'char*', 'size_t', 'size_t'] void []
common_chat_msg_parser::consume_json
in /src/llama.cpp/common/chat-parser.cpp:243
[] common_json []
test_json_with_dumped_args
in /src/llama.cpp/tests/test-chat-parser.cpp:211
[] void []
common_chat_msg_parser::consume_json_with_dumped_args
in /src/llama.cpp/common/chat-parser.cpp:250
['std::vector >', 'std::vector >'] common_chat_msg_parser::consume_json_result []
parser::accept
in /src/llama.cpp/vendor/nlohmann/json.hpp:13011
[] bool []
from_cbor
in /src/llama.cpp/vendor/nlohmann/json.hpp:24465
['detail::span_input_adapter'] basic_json []
from_msgpack
in /src/llama.cpp/vendor/nlohmann/json.hpp:24520
['detail::span_input_adapter'] basic_json []
from_ubjson
in /src/llama.cpp/vendor/nlohmann/json.hpp:24574
['detail::span_input_adapter'] basic_json []
from_bson
in /src/llama.cpp/vendor/nlohmann/json.hpp:24658
['detail::span_input_adapter'] basic_json []
minja::chat_template::chat_template
in /src/llama.cpp/vendor/minja/chat-template.hpp:109
['std::string', 'std::string', 'std::string'] void []
operator<(const Value & other) const
in /src/llama.cpp/vendor/minja/minja.hpp:341
['Value'] bool []
operator>(const Value & other) const
in /src/llama.cpp/vendor/minja/minja.hpp:350
['Value'] bool []
operator==(const Value & other) const
in /src/llama.cpp/vendor/minja/minja.hpp:359
['Value'] bool []
&at(const Value & index)
in /src/llama.cpp/vendor/minja/minja.hpp:419
['Value'] Value []
operator-() const
in /src/llama.cpp/vendor/minja/minja.hpp:454
[] Value []
operator+(const Value & rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:468
['Value'] Value []
operator-(const Value & rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:482
['Value'] Value []
operator*(const Value & rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:488
['Value'] Value []
operator/(const Value & rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:501
['Value'] Value []
operator%(const Value & rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:507
['Value'] Value []
Value::get () const
in /src/llama.cpp/vendor/minja/minja.hpp:544
[] json []
operator()(const minja::Value & v) const
in /src/llama.cpp/vendor/minja/minja.hpp:578
['minja::Value'] size_t []
minja::SetNode::do_render
in /src/llama.cpp/vendor/minja/minja.hpp:1113
['std::shared_ptr '] void []
minja::SubscriptExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1218
['std::shared_ptr '] Value []
minja::MethodCallExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1464
['std::shared_ptr '] Value []
minja::FilterExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1583
['std::shared_ptr '] Value []
free ['/src/llama.cpp/fuzzers/fuzz_structurally_created.cpp', '/src/llama.cpp/fuzzers/fuzz_structured.cpp', '/src/llama.cpp/fuzzers/fuzz_tokenizer.cpp', '/src/llama.cpp/fuzzers/fuzz_inference.cpp', '/src/llama.cpp/fuzzers/fuzz_load_model.cpp'] N/A
Blocker function Arguments type Return type Constants touched
clip_log_internal
in /src/llama.cpp/tools/mtmd/clip-impl.h:222
['enum ggml_log_level', 'char*'] void []
ingest_args
in /src/llama.cpp/tools/tokenize/tokenize.cpp:80
['int', 'char**'] std::vector []
write_utf8_cstr_to_stdout
in /src/llama.cpp/tools/tokenize/tokenize.cpp:132
['char*', 'bool'] void []
linenoise
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp:1832
['char*'] char []
linenoiseFree
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp:1861
['void*'] void []
linenoiseAtExit
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp:1881
[] void []
chat_loop
in /src/llama.cpp/tools/run/run.cpp:1195
['LlamaData', 'Opt'] int []
linenoiseHistoryLoad
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp:1977
['char*'] int []
linenoiseHistorySetMaxLen
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp:1926
['int'] int []
~linenoiseCompletions()
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.h:78
[] void []
~train_context()
in /src/llama.cpp/tools/cvector-generator/cvector-generator.cpp:262
[] void []
minja::BinaryOpExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1304
['std::shared_ptr '] Value []
ggml_backend_sycl_graph_compute
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:4008
['ggml_backend_t', 'ggml_cgraph*'] ggml_status []
eval_grad
in /src/llama.cpp/tests/test-backend-ops.cpp:751
['ggml_backend_t', 'char*'] bool []
ggml_vk_test_dequant
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:8272
['ggml_backend_vk_context*', 'size_t', 'ggml_type'] void []
ggml_backend_vk_graph_compute
in /src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:9836
['ggml_backend_t', 'ggml_cgraph*'] ggml_status []
llama_model_quantize
in /src/llama.cpp/src/llama-quant.cpp:1037
['char*', 'char*', 'llama_model_quantize_params*'] uint32_t []
Java_android_llama_cpp_LLamaAndroid_backend_1free
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:147
[] JNIEXPORT []
PCA::run_pca
in /src/llama.cpp/tools/cvector-generator/pca.hpp:296
['struct pca_params', 'std::vector ', 'std::vector '] void []
ggml_backend_tensor_copy_async
in /src/llama.cpp/ggml/src/ggml-backend.cpp:393
['ggml_backend_t', 'ggml_backend_t', 'struct ggml_tensor*', 'struct ggml_tensor*'] void []
clip_encode_float_image
in /src/llama.cpp/tools/mtmd/clip.cpp:4130
['struct clip_ctx*', 'int', 'float*', 'int', 'int', 'float*'] bool []
eval_message
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:199
['mtmd_cli_context', 'common_chat_msg'] int []
process_chunk
in /src/llama.cpp/tools/server/utils.hpp:1309
['llama_context*', 'mtmd_context*', 'llama_pos', 'int32_t', 'llama_pos'] int32_t []
test_backend
in /src/llama.cpp/tests/test-opt.cpp:790
['ggml_backend_sched_t', 'ggml_backend_t'] std::pair []
llama_context::opt_epoch
in /src/llama.cpp/src/llama-context.cpp:2107
['ggml_opt_dataset_t', 'ggml_opt_result_t', 'ggml_opt_result_t', 'int64_t', 'ggml_opt_epoch_callback', 'ggml_opt_epoch_callback'] void []
llama_context::decode
in /src/llama.cpp/src/llama-context.cpp:880
['llama_batch'] int []
ggml_backend_multi_buffer_free_buffer
in /src/llama.cpp/ggml/src/ggml-backend.cpp:539
['ggml_backend_buffer_t'] void []
operator()(ggml_backend_sched_t sched)
in /src/llama.cpp/ggml/include/ggml-cpp.h:34
['ggml_backend_sched_t'] void []
~lora_merge_ctx()
in /src/llama.cpp/tools/export-lora/export-lora.cpp:398
[] void []
operator()(ggml_gallocr_t galloc)
in /src/llama.cpp/ggml/include/ggml-cpp.h:25
['ggml_gallocr_t'] void []
llama_kv_cache_unified::update
in /src/llama.cpp/src/llama-kv-cache-unified.cpp:451
['llama_context*', 'bool', 'defrag_info'] bool []
alloc_compute_meta
in /src/llama.cpp/tools/mtmd/clip.cpp:2595
['clip_ctx'] void []
LlamaData::init
in /src/llama.cpp/tools/run/run.cpp:637
['Opt'] int []
&operator[](size_t index)
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:2867
['size_t'] typename std::enable_if ::type []
llama_model::load_tensors
in /src/llama.cpp/src/llama-model.cpp:1536
['llama_model_loader'] bool []
Java_android_llama_cpp_LLamaAndroid_new_1context
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:109
['JNIEnv*', 'jlong'] JNIEXPORT []
mtmd_cli_context
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:89
['common_params'] void []
load_model
in /src/llama.cpp/tools/server/server.cpp:1940
['common_params'] bool []
run_merge
in /src/llama.cpp/tools/export-lora/export-lora.cpp:186
[] void []
Java_android_llama_cpp_LLamaAndroid_load_1model
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:83
['JNIEnv*', 'jstring'] JNIEXPORT []
llama_model_load_from_splits
in /src/llama.cpp/src/llama.cpp:251
['char**', 'size_t', 'struct llama_model_params'] struct llama_model []
common_opt_dataset_init
in /src/llama.cpp/common/common.cpp:1535
['struct llama_context*', 'std::vector ', 'int64_t'] ggml_opt_dataset_t []
llama_context::opt_init
in /src/llama.cpp/src/llama-context.cpp:1963
['struct llama_model*', 'struct llama_opt_params'] void []
test_roundtrip
in /src/llama.cpp/tests/test-gguf.cpp:1073
['ggml_backend_dev_t', 'unsigned int', 'bool'] std::pair []
test_gguf_set_kv
in /src/llama.cpp/tests/test-gguf.cpp:1203
['ggml_backend_dev_t', 'unsigned int'] std::pair []
PCA::pca_model
in /src/llama.cpp/tools/cvector-generator/pca.hpp:63
['struct ggml_tensor*'] void []
llama_model::create_memory
in /src/llama.cpp/src/llama-model.cpp:14436
['llama_memory_params', 'llama_cparams'] llama_memory_i []
llama_adapter_cvec::apply
in /src/llama.cpp/src/llama-adapter.cpp:93
['llama_model', 'float*', 'size_t', 'int32_t', 'int32_t', 'int32_t'] bool []
ggml_log_internal
in /src/llama.cpp/ggml/src/ggml.c:253
['enum ggml_log_level', 'char*'] void []
ggml_backend_cpu_buffer_free_buffer
in /src/llama.cpp/ggml/src/ggml-backend.cpp:1898
['ggml_backend_buffer_t'] void []
test_handcrafted_file
in /src/llama.cpp/tests/test-gguf.cpp:662
['unsigned int'] std::pair []
~file_input()
in /src/llama.cpp/tools/export-lora/export-lora.cpp:108
[] void []
~pca_model()
in /src/llama.cpp/tools/cvector-generator/pca.hpp:127
[] void []
gguf_merge
in /src/llama.cpp/tools/gguf-split/gguf-split.cpp:398
['split_params'] void []
llama_context::~llama_context()
in /src/llama.cpp/src/llama-context.cpp:345
[] void []
clip_model_loader
in /src/llama.cpp/tools/mtmd/clip.cpp:2004
['char*'] void []
file_input
in /src/llama.cpp/tools/export-lora/export-lora.cpp:69
['std::string', 'float'] void []
gguf_split
in /src/llama.cpp/tools/gguf-split/gguf-split.cpp:360
['split_params'] void []
llama_model_loader::llama_model_loader
in /src/llama.cpp/src/llama-model-loader.cpp:468
['std::string', 'std::vector ', 'bool', 'bool', 'llama_model_kv_override*', 'llama_model_tensor_buft_override*'] void []
gguf_ex_read_0
in /src/llama.cpp/examples/gguf/gguf.cpp:86
['std::string'] bool []
gguf_ex_read_1
in /src/llama.cpp/examples/gguf/gguf.cpp:150
['std::string', 'bool'] bool []
gguf_hash
in /src/llama.cpp/examples/gguf-hash/gguf-hash.cpp:286
['hash_params'] hash_exit_code_t []
ggml_graph_compute_helper
in /src/llama.cpp/tests/test-rope.cpp:116
['std::vector ', 'ggml_cgraph*', 'int'] void []
ggml_backend_cpu_graph_plan_compute
in /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp:145
['ggml_backend_t', 'ggml_backend_graph_plan_t'] enum ggml_status []
ggml_backend_cpu_graph_compute
in /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp:153
['ggml_backend_t', 'struct ggml_cgraph*'] enum ggml_status []
lora_merge_ctx
in /src/llama.cpp/tools/export-lora/export-lora.cpp:130
['std::string', 'std::vector ', 'std::string', 'int'] void []
ggml_backend_cpu_device_init_backend
in /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp:372
['ggml_backend_dev_t', 'char*'] ggml_backend_t []
ggml_backend_cpu_get_features
in /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp:504
['ggml_backend_reg_t'] ggml_backend_feature []
ggml_backend_registry
in /src/llama.cpp/ggml/src/ggml-backend-reg.cpp:167
[] void []
ggml::cpu::repack::extra_buffer_type::supports_op
in /src/llama.cpp/ggml/src/ggml-cpu/repack.cpp:1509
['struct ggml_tensor*'] bool []
ggml::cpu::repack::extra_buffer_type::get_tensor_traits
in /src/llama.cpp/ggml/src/ggml-cpu/repack.cpp:1545
['struct ggml_tensor*'] ggml::cpu::tensor_traits []
ggml_backend_cpu_device_get_extra_buffers_type
in /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp:68
['ggml_backend_dev_t'] ggml_backend_buffer_type_t []
ggml_backend_cpu_device_supports_buft
in /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp:452
['ggml_backend_dev_t', 'ggml_backend_buffer_type_t'] bool []
ggml_backend_cpu_device_supports_op
in /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp:392
['ggml_backend_dev_t', 'struct ggml_tensor*'] bool []
ggml_graph_compute_secondary_thread
in /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:2937
['void*'] thread_ret_t []
ggml_backend_cpu_graph_plan_create
in /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp:114
['ggml_backend_t', 'struct ggml_cgraph*'] ggml_backend_graph_plan_t []
ggml::cpu::kleidiai::extra_buffer_type::supports_op
in /src/llama.cpp/ggml/src/ggml-cpu/kleidiai/kleidiai.cpp:423
['struct ggml_tensor*'] bool []
ggml::cpu::kleidiai::extra_buffer_type::get_tensor_traits
in /src/llama.cpp/ggml/src/ggml-cpu/kleidiai/kleidiai.cpp:440
['struct ggml_tensor*'] ggml::cpu::tensor_traits []
ggml::cpu::amx::extra_buffer_type::supports_op
in /src/llama.cpp/ggml/src/ggml-cpu/amx/amx.cpp:143
['struct ggml_tensor*'] bool []
ggml::cpu::amx::extra_buffer_type::get_tensor_traits
in /src/llama.cpp/ggml/src/ggml-cpu/amx/amx.cpp:166
['struct ggml_tensor*'] ggml::cpu::tensor_traits []
operator()(ggml_context* ctx)
in /src/llama.cpp/ggml/include/ggml-cpp.h:17
['ggml_context*'] void []
Java_android_llama_cpp_LLamaAndroid_backend_1init
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:331
[] JNIEXPORT []
gguf_ex_write
in /src/llama.cpp/examples/gguf/gguf.cpp:21
['std::string'] bool []
ggml_backend_sycl_buffer_set_tensor
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:381
['ggml_backend_buffer_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
ggml_backend_sycl_buffer_cpy_tensor
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:439
['ggml_backend_buffer_t', 'ggml_tensor*', 'ggml_tensor*'] bool []
~ggml_sycl_pool_alloc()
in /src/llama.cpp/ggml/src/ggml-sycl/common.hpp:239
[] void []
linenoiseAddCompletion
in /src/llama.cpp/tools/run/linenoise.cpp/linenoise.cpp:1020
['linenoiseCompletions*', 'char*'] void []
~host_buffer()
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:2151
[] void []
~device_memory()
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:2824
[] void []
ggml_backend_opencl_init
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:2376
[] ggml_backend_t []
ggml_backend_opencl_buffer_init_tensor
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:2509
['ggml_backend_buffer_t', 'ggml_tensor*'] enum ggml_status []
ggml_backend_opencl_buffer_set_tensor
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:2566
['ggml_backend_buffer_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
ggml_backend_opencl_buffer_get_tensor
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:2826
['ggml_backend_buffer_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
ggml_backend_opencl_buffer_clear
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:2881
['ggml_backend_buffer_t', 'uint8_t'] void []
ggml_backend_opencl_buffer_type_alloc_buffer
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:2920
['ggml_backend_buffer_type_t', 'size_t'] ggml_backend_buffer_t []
ggml_backend_opencl_buffer_type_get_alignment
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:2938
['ggml_backend_buffer_type_t'] size_t []
ggml_backend_opencl_buffer_type_get_max_size
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:2943
['ggml_backend_buffer_type_t'] size_t []
ggml_backend_opencl_device_init
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:3008
['ggml_backend_dev_t', 'char*'] ggml_backend_t []
ggml_backend_opencl_device_supports_buft
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:3049
['ggml_backend_dev_t', 'ggml_backend_buffer_type_t'] bool []
dump_tensor
in /src/llama.cpp/ggml/src/ggml-opencl/ggml-opencl.cpp:3154
['ggml_backend_t', 'struct ggml_tensor*'] void []
ggml_backend_amx_buffer_free_buffer
in /src/llama.cpp/ggml/src/ggml-cpu/amx/amx.cpp:45
['ggml_backend_buffer_t'] void []
~ggml_cann_pool_alloc()
in /src/llama.cpp/ggml/src/ggml-cann/common.h:171
[] void []
ggml_backend_cann_buffer_set_tensor
in /src/llama.cpp/ggml/src/ggml-cann/ggml-cann.cpp:1131
['ggml_backend_buffer_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
ggml_backend_cann_buffer_get_tensor
in /src/llama.cpp/ggml/src/ggml-cann/ggml-cann.cpp:1169
['ggml_backend_buffer_t', 'ggml_tensor*', 'void*', 'size_t', 'size_t'] void []
~mtmd_cli_context()
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:123
[] void []
perplexity
in /src/llama.cpp/tools/perplexity/perplexity.cpp:441
['llama_context*', 'common_params', 'int32_t'] results_perplexity []
hellaswag_score
in /src/llama.cpp/tools/perplexity/perplexity.cpp:741
['llama_context*', 'common_params'] void []
multiple_choice_score
in /src/llama.cpp/tools/perplexity/perplexity.cpp:1402
['llama_context*', 'common_params'] void []
kl_divergence
in /src/llama.cpp/tools/perplexity/perplexity.cpp:1688
['llama_context*', 'common_params'] void []
compute_imatrix
in /src/llama.cpp/tools/imatrix/imatrix.cpp:431
['llama_context*', 'common_params'] bool []
~server_context()
in /src/llama.cpp/tools/server/server.cpp:1920
[] void []
process_single_task
in /src/llama.cpp/tools/server/server.cpp:2745
['server_task'] void []
encode
in /src/llama.cpp/examples/gritlm/gritlm.cpp:10
['llama_context*', 'std::vector ', 'std::string'] std::vector > []
XXH_errorcode::XXH32_freeState
in /src/llama.cpp/examples/gguf-hash/deps/xxhash/xxhash.h:3126
['XXH32_state_t*'] XXH_PUBLIC_API []
XXH_errorcode::XXH64_freeState
in /src/llama.cpp/examples/gguf-hash/deps/xxhash/xxhash.h:3572
['XXH64_state_t*'] XXH_PUBLIC_API []
XXH_errorcode::XXH3_freeState
in /src/llama.cpp/examples/gguf-hash/deps/xxhash/xxhash.h:6157
['XXH3_state_t*'] XXH_PUBLIC_API []
ma_context_get_device_info__alsa
in /src/llama.cpp/vendor/miniaudio/miniaudio.h:27595
['ma_context*', 'ma_device_type', 'ma_device_id*', 'ma_device_info*'] ma_result []
ma_device_init__alsa
in /src/llama.cpp/vendor/miniaudio/miniaudio.h:28190
['ma_device*', 'ma_device_config*', 'ma_device_descriptor*', 'ma_device_descriptor*'] ma_result []

Sink functions/methods found for CWE-20 (Improper Input Validation)

Target sink Reached by fuzzer Function call path Possible branch blockers
get ['/src/llama.cpp/fuzzers/fuzz_structurally_created.cpp', '/src/llama.cpp/fuzzers/fuzz_structured.cpp', '/src/llama.cpp/fuzzers/fuzz_load_model.cpp', '/src/llama.cpp/fuzzers/fuzz_tokenizer.cpp', '/src/llama.cpp/fuzzers/fuzz_inference.cpp', '/src/llama.cpp/fuzzers/fuzz_json_to_grammar.cpp'] N/A
Blocker function Arguments type Return type Constants touched
minja::BinaryOpExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1304
['std::shared_ptr '] Value []
ggml_backend_sycl_graph_compute
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:4008
['ggml_backend_t', 'ggml_cgraph*'] ggml_status []
eval_grad
in /src/llama.cpp/tests/test-backend-ops.cpp:751
['ggml_backend_t', 'char*'] bool []
mtmd_cli_context
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:89
['common_params'] void []
load_media
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:153
['std::string'] bool []
clip_graph
in /src/llama.cpp/tools/mtmd/clip.cpp:444
['clip_ctx*', 'clip_image_f32'] void []
clip_model_loader
in /src/llama.cpp/tools/mtmd/clip.cpp:2004
['char*'] void []
LlamaData::init
in /src/llama.cpp/tools/run/run.cpp:637
['Opt'] int []
&operator[](size_t index)
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:2867
['size_t'] typename std::enable_if ::type []
llama_model::load_tensors
in /src/llama.cpp/src/llama-model.cpp:1536
['llama_model_loader'] bool []
llama_context::decode
in /src/llama.cpp/src/llama-context.cpp:880
['llama_batch'] int []
llama_context::opt_epoch
in /src/llama.cpp/src/llama-context.cpp:2107
['ggml_opt_dataset_t', 'ggml_opt_result_t', 'ggml_opt_result_t', 'int64_t', 'ggml_opt_epoch_callback', 'ggml_opt_epoch_callback'] void []
load_model
in /src/llama.cpp/tools/server/server.cpp:1940
['common_params'] bool []
Java_android_llama_cpp_LLamaAndroid_load_1model
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:83
['JNIEnv*', 'jstring'] JNIEXPORT []
llama_model_load_from_splits
in /src/llama.cpp/src/llama.cpp:251
['char**', 'size_t', 'struct llama_model_params'] struct llama_model []
clip_image_f32_get_img
in /src/llama.cpp/tools/mtmd/clip.cpp:2793
['struct clip_image_f32_batch*', 'int'] clip_image_f32 []
clip_encode_float_image
in /src/llama.cpp/tools/mtmd/clip.cpp:4130
['struct clip_ctx*', 'int', 'float*', 'int', 'int', 'float*'] bool []
eval_message
in /src/llama.cpp/tools/mtmd/mtmd-cli.cpp:199
['mtmd_cli_context', 'common_chat_msg'] int []
process_chunk
in /src/llama.cpp/tools/server/utils.hpp:1309
['llama_context*', 'mtmd_context*', 'llama_pos', 'int32_t', 'llama_pos'] int32_t []
mtmd::nx
in /src/llama.cpp/tools/mtmd/mtmd.h:259
[] uint32_t []
mtmd::ny
in /src/llama.cpp/tools/mtmd/mtmd.h:260
[] uint32_t []
mtmd::data
in /src/llama.cpp/tools/mtmd/mtmd.h:261
[] unsigned char []
mtmd::n_bytes
in /src/llama.cpp/tools/mtmd/mtmd.h:262
[] size_t []
mtmd::id
in /src/llama.cpp/tools/mtmd/mtmd.h:263
[] std::string []
mtmd::set_id
in /src/llama.cpp/tools/mtmd/mtmd.h:264
['char*'] void []
mtmd::c_ptr
in /src/llama.cpp/tools/mtmd/mtmd.h:274
[] std::vector []
*operator[](size_t idx)
in /src/llama.cpp/tools/mtmd/mtmd.h:289
['size_t'] mtmd_input_chunk []
add_media
in /src/llama.cpp/tools/mtmd/mtmd.cpp:470
['mtmd_bitmap*'] int32_t []
check_context_size
in /src/llama.cpp/tools/run/run.cpp:972
['llama_context_ptr', 'llama_batch'] int []
chat_loop
in /src/llama.cpp/tools/run/run.cpp:1195
['LlamaData', 'Opt'] int []
get_tts_version
in /src/llama.cpp/tools/tts/tts.cpp:477
['llama_model*'] outetts_version []
audio_text_from_speaker
in /src/llama.cpp/tools/tts/tts.cpp:499
['json'] std::string []
audio_data_from_speaker
in /src/llama.cpp/tools/tts/tts.cpp:512
['json'] std::string []
params_from_json_cmpl
in /src/llama.cpp/tools/server/server.cpp:243
['llama_context*', 'common_params', 'json'] slot_params []
update_slots
in /src/llama.cpp/tools/server/server.cpp:2961
[] void []
tokenize_input_prompts
in /src/llama.cpp/tools/server/utils.hpp:199
['llama_vocab*', 'json', 'bool', 'bool'] std::vector []
get_common_prefix
in /src/llama.cpp/tools/server/utils.hpp:1254
['server_tokens'] size_t []
validate
in /src/llama.cpp/tools/server/utils.hpp:1286
['struct llama_context*'] bool []
ggml_sycl_op_mul_mat_sycl
in /src/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:2062
['ggml_backend_sycl_context', 'ggml_tensor*', 'ggml_tensor*', 'ggml_tensor*', 'char*', 'float*', 'char*', 'float*', 'int64_t', 'int64_t', 'int64_t', 'int64_t', 'queue_ptr'] void []
dpct::detail::gemm_impl
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:1748
['sycl::queue', 'oneapi::math::transpose', 'oneapi::math::transpose', 'int', 'int', 'int', 'void*', 'void*', 'int', 'void*', 'int', 'void*', 'void*', 'int'] void []
operator[](size_t index) const
in /src/llama.cpp/ggml/src/ggml-sycl/dpct/helper.hpp:2755
['size_t'] pointer_t []
ggml_backend_blas_graph_compute
in /src/llama.cpp/ggml/src/ggml-blas/ggml-blas.cpp:227
['ggml_backend_t', 'struct ggml_cgraph*'] enum ggml_status []
llama_model::create_memory
in /src/llama.cpp/src/llama-model.cpp:14436
['llama_memory_params', 'llama_cparams'] llama_memory_i []
llama_memory_recurrent::state_read
in /src/llama.cpp/src/llama-memory-recurrent.cpp:719
['llama_io_read_i', 'llama_seq_id'] void []
llama_memory_recurrent::total_size
in /src/llama.cpp/src/llama-memory-recurrent.cpp:648
[] size_t []
llama_model_quantize
in /src/llama.cpp/src/llama-quant.cpp:1037
['char*', 'char*', 'llama_model_quantize_params*'] uint32_t []
Java_android_llama_cpp_LLamaAndroid_new_1context
in /src/llama.cpp/examples/llama.android/llama/src/main/cpp/llama-android.cpp:109
['JNIEnv*', 'jlong'] JNIEXPORT []
llama_context::get_sched
in /src/llama.cpp/src/llama-context.cpp:387
[] ggml_backend_sched_t []
llama_context::get_ctx_compute
in /src/llama.cpp/src/llama-context.cpp:391
[] ggml_context []
llama_context::get_memory
in /src/llama.cpp/src/llama-context.cpp:423
[] llama_memory_t []
llama_context::set_abort_callback
in /src/llama.cpp/src/llama-context.cpp:610
['void*'] void []
llama_context::state_set_data
in /src/llama.cpp/src/llama-context.cpp:1576
['uint8_t*', 'size_t'] size_t []
llama_context::state_load_file
in /src/llama.cpp/src/llama-context.cpp:1616
['char*', 'llama_token*', 'size_t', 'size_t*'] bool []
llama_context::opt_init
in /src/llama.cpp/src/llama-context.cpp:1963
['struct llama_model*', 'struct llama_opt_params'] void []
llama_kv_cache_unified::state_read
in /src/llama.cpp/src/llama-kv-cache-unified.cpp:1391
['llama_io_read_i', 'llama_seq_id'] void []
llama_kv_cache_unified::total_size
in /src/llama.cpp/src/llama-kv-cache-unified.cpp:917
[] size_t []
llama_kv_cache_unified_iswa::get_base
in /src/llama.cpp/src/llama-kv-cache-unified-iswa.cpp:191
[] llama_kv_cache_unified []
llama_kv_cache_unified_iswa::get_swa
in /src/llama.cpp/src/llama-kv-cache-unified-iswa.cpp:195
[] llama_kv_cache_unified []
llama_kv_cache_unified_iswa_context::get_base
in /src/llama.cpp/src/llama-kv-cache-unified-iswa.cpp:269
[] llama_kv_cache_unified_context []
llama_kv_cache_unified_iswa_context::get_swa
in /src/llama.cpp/src/llama-kv-cache-unified-iswa.cpp:275
[] llama_kv_cache_unified_context []
llama_model_loader::llama_model_loader
in /src/llama.cpp/src/llama-model-loader.cpp:468
['std::string', 'std::vector ', 'bool', 'bool', 'llama_model_kv_override*', 'llama_model_tensor_buft_override*'] void []
llama_model_loader::load_all_data
in /src/llama.cpp/src/llama-model-loader.cpp:918
['struct ggml_context*', 'llama_buf_map', 'llama_mlocks*', 'llama_progress_callback', 'void*'] bool []
llama_vocab::impl::tokenize(const std::string & raw_text, bool add_special, bool parse_special) const
in /src/llama.cpp/src/llama-vocab.cpp:2400
['std::string', 'bool', 'bool'] std::vector []
llama_memory_hybrid::get_mem_attn
in /src/llama.cpp/src/llama-memory-hybrid.cpp:171
[] llama_kv_cache_unified []
llama_memory_hybrid::get_mem_recr
in /src/llama.cpp/src/llama-memory-hybrid.cpp:175
[] llama_memory_recurrent []
llama_memory_hybrid_context::get_attn
in /src/llama.cpp/src/llama-memory-hybrid.cpp:240
[] llama_kv_cache_unified_context []
llama_memory_hybrid_context::get_recr
in /src/llama.cpp/src/llama-memory-hybrid.cpp:244
[] llama_memory_recurrent_context []
test_failure_left_recursion
in /src/llama.cpp/tests/test-grammar-integration.cpp:702
[] void []
test_template_output_parsers
in /src/llama.cpp/tests/test-chat.cpp:552
[] void []
oaicompat_chat_params_parse
in /src/llama.cpp/tools/server/utils.hpp:590
['json', 'oaicompat_parser_options', 'std::vector '] json []
common_params_parse
in /src/llama.cpp/common/arg.cpp:1179
['int', 'char**', 'common_params', 'llama_example'] bool []
common_chat_format_single
in /src/llama.cpp/common/chat.cpp:439
['struct common_chat_templates*', 'std::vector ', 'common_chat_msg', 'bool', 'bool'] std::string []
export_md
in /src/llama.cpp/examples/gen-docs/gen-docs.cpp:50
['std::string', 'llama_example'] void []
*operator->()
in /src/llama.cpp/vendor/cpp-httplib/httplib.h:1208
[] Response []
lexer::scan_string
in /src/llama.cpp/vendor/nlohmann/json.hpp:7277
[] token_type []
lexer::scan_comment
in /src/llama.cpp/vendor/nlohmann/json.hpp:7867
[] bool []
lexer::scan_number
in /src/llama.cpp/vendor/nlohmann/json.hpp:7992
[] token_type []
parser::parser
in /src/llama.cpp/vendor/nlohmann/json.hpp:12929
['InputAdapterType'] void []
parser::parse
in /src/llama.cpp/vendor/nlohmann/json.hpp:12951
['bool', 'BasicJsonType'] void []
(3) bool sax_parse(const input_format_t format, json_sax_t* sax_, const bool strict = true, const cbor_tag_handler_t tag_handler = cbor_tag_handler_t::error)
in /src/llama.cpp/vendor/nlohmann/json.hpp:9882
['input_format_t', 'json_sax_t*'] JSON_HEDLEY_NON_NULL []
test_json_healing
in /src/llama.cpp/tests/test-json-partial.cpp:16
[] void []
ggml_backend_rpc_start_server
in /src/llama.cpp/ggml/src/ggml-rpc/ggml-rpc.cpp:1595
['ggml_backend_t', 'char*', 'char*', 'size_t', 'size_t'] void []
common_chat_msg_parser::consume_json
in /src/llama.cpp/common/chat-parser.cpp:243
[] common_json []
test_json_with_dumped_args
in /src/llama.cpp/tests/test-chat-parser.cpp:211
[] void []
common_chat_msg_parser::consume_json_with_dumped_args
in /src/llama.cpp/common/chat-parser.cpp:250
['std::vector >', 'std::vector >'] common_chat_msg_parser::consume_json_result []
parser::accept
in /src/llama.cpp/vendor/nlohmann/json.hpp:13011
[] bool []
from_cbor
in /src/llama.cpp/vendor/nlohmann/json.hpp:24465
['detail::span_input_adapter'] basic_json []
from_msgpack
in /src/llama.cpp/vendor/nlohmann/json.hpp:24520
['detail::span_input_adapter'] basic_json []
from_ubjson
in /src/llama.cpp/vendor/nlohmann/json.hpp:24574
['detail::span_input_adapter'] basic_json []
from_bson
in /src/llama.cpp/vendor/nlohmann/json.hpp:24658
['detail::span_input_adapter'] basic_json []
minja::chat_template::chat_template
in /src/llama.cpp/vendor/minja/chat-template.hpp:109
['std::string', 'std::string', 'std::string'] void []
operator<(const Value & other) const
in /src/llama.cpp/vendor/minja/minja.hpp:341
['Value'] bool []
operator>(const Value & other) const
in /src/llama.cpp/vendor/minja/minja.hpp:350
['Value'] bool []
operator==(const Value & other) const
in /src/llama.cpp/vendor/minja/minja.hpp:359
['Value'] bool []
&at(const Value & index)
in /src/llama.cpp/vendor/minja/minja.hpp:419
['Value'] Value []
operator-() const
in /src/llama.cpp/vendor/minja/minja.hpp:454
[] Value []
operator+(const Value & rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:468
['Value'] Value []
operator-(const Value & rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:482
['Value'] Value []
operator*(const Value & rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:488
['Value'] Value []
operator/(const Value & rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:501
['Value'] Value []
operator%(const Value & rhs) const
in /src/llama.cpp/vendor/minja/minja.hpp:507
['Value'] Value []
Value::get<json>() const
in /src/llama.cpp/vendor/minja/minja.hpp:544
[] json []
operator()(const minja::Value & v) const
in /src/llama.cpp/vendor/minja/minja.hpp:578
['minja::Value'] size_t []
minja::SetNode::do_render
in /src/llama.cpp/vendor/minja/minja.hpp:1113
['std::shared_ptr '] void []
minja::SubscriptExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1218
['std::shared_ptr '] Value []
minja::MethodCallExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1464
['std::shared_ptr '] Value []
minja::FilterExpr::do_evaluate
in /src/llama.cpp/vendor/minja/minja.hpp:1583
['std::shared_ptr '] Value []
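Each row in the table above pairs a function with its argument-type list and return type. As an illustration of how a row maps back to a C++ declaration, the `llama_model_load_from_splits` entry (args `['char**', 'size_t', 'struct llama_model_params']`, return `struct llama_model`) corresponds to a declaration along these lines. This is a hedged reconstruction: the parameter names are illustrative assumptions, and the report's type list may drop qualifiers such as `const`.

```cpp
// Hypothetical reconstruction of the llama_model_load_from_splits table row.
// Parameter names are assumptions for illustration; only the types come from
// the report ('char**', 'size_t', 'struct llama_model_params' -> llama_model*).
struct llama_model * llama_model_load_from_splits(
        const char **            paths,    // char**  : paths to the split files
        size_t                   n_paths,  // size_t  : number of split files
        struct llama_model_params params); // model params passed by value
```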

Sink functions/methods found for CWE-22

Target sink Reached by fuzzer Function call path Possible branch blockers
write [] Path 1, Path 2 N/A