=== RUN TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT TestAddons/parallel/LocalPath
addons_test.go:1009: (dbg) Run: kubectl --context addons-520986 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:1015: (dbg) Run: kubectl --context addons-520986 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:1019: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run: kubectl --context addons-520986 get pvc test-pvc -o jsonpath={.status.phase} -n default
... [the helpers_test.go:402 poll line above repeats 300 more times, identical each time, as the test polls the PVC phase over the 5m0s wait] ...
helpers_test.go:402: (dbg) Non-zero exit: kubectl --context addons-520986 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (4.676µs)
helpers_test.go:404: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:1020: failed waiting for PVC test-pvc: context deadline exceeded
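The wait that fails above is driven by repeatedly shelling out to kubectl until the PVC reports a terminal phase or the 5m0s deadline passes. The following is a minimal Go sketch of that polling pattern, not the actual helpers_test.go code: the one-second interval, the target phase "Bound", and the function names are assumptions, while the kubectl invocation, context, namespace, PVC name, and timeout are taken from the log.

// Minimal sketch (hypothetical, not the minikube helper): poll the PVC phase
// the same way the log above does, giving up when the 5m0s context expires.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase shells out to kubectl the way helpers_test.go:402 does in the
// log above; the 1s interval and the "Bound" target are assumptions.
func waitForPVCPhase(ctx context.Context, kubeContext, ns, name, want string) error {
	tick := time.NewTicker(time.Second)
	defer tick.Stop()
	for {
		out, err := exec.CommandContext(ctx, "kubectl",
			"--context", kubeContext, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		select {
		case <-ctx.Done():
			// The failure path seen above: "failed waiting for PVC test-pvc: context deadline exceeded".
			return fmt.Errorf("failed waiting for PVC %s: %w", name, ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitForPVCPhase(ctx, "addons-520986", "default", "test-pvc", "Bound"); err != nil {
		fmt.Println(err)
	}
}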
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-520986 -n addons-520986
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p addons-520986 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-520986 logs -n 25: (1.111374618s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs:
-- stdout --
==> Audit <==
┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-000021 │ download-only-000021 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
│ start │ --download-only -p binary-mirror-212052 --alsologtostderr --binary-mirror http://127.0.0.1:45175 --driver=kvm2 --container-runtime=containerd │ binary-mirror-212052 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ │
│ delete │ -p binary-mirror-212052 │ binary-mirror-212052 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
│ addons │ enable dashboard -p addons-520986 │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ │
│ addons │ disable dashboard -p addons-520986 │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ │
│ start │ -p addons-520986 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:57 UTC │
│ addons │ addons-520986 addons disable volcano --alsologtostderr -v=1 │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
│ addons │ addons-520986 addons disable gcp-auth --alsologtostderr -v=1 │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
│ addons │ enable headlamp -p addons-520986 --alsologtostderr -v=1 │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
│ addons │ addons-520986 addons disable metrics-server --alsologtostderr -v=1 │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ addons │ addons-520986 addons disable yakd --alsologtostderr -v=1 │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ addons │ addons-520986 addons disable headlamp --alsologtostderr -v=1 │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ ip │ addons-520986 ip │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ addons │ addons-520986 addons disable registry --alsologtostderr -v=1 │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ addons │ addons-520986 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ addons │ addons-520986 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ ssh │ addons-520986 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ ip │ addons-520986 ip │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ addons │ addons-520986 addons disable ingress-dns --alsologtostderr -v=1 │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ addons │ addons-520986 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ addons │ addons-520986 addons disable ingress --alsologtostderr -v=1 │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-520986 │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ addons │ addons-520986 addons disable registry-creds --alsologtostderr -v=1 │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ addons │ addons-520986 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ addons │ addons-520986 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-520986 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/09 01:55:46
Running on machine: ubuntu-20-agent-12
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1209 01:55:46.610879 790270 out.go:360] Setting OutFile to fd 1 ...
I1209 01:55:46.611046 790270 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 01:55:46.611058 790270 out.go:374] Setting ErrFile to fd 2...
I1209 01:55:46.611066 790270 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 01:55:46.611351 790270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
I1209 01:55:46.611957 790270 out.go:368] Setting JSON to false
I1209 01:55:46.613003 790270 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":27497,"bootTime":1765217850,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1209 01:55:46.613059 790270 start.go:143] virtualization: kvm guest
I1209 01:55:46.614992 790270 out.go:179] * [addons-520986] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1209 01:55:46.616309 790270 out.go:179] - MINIKUBE_LOCATION=22081
I1209 01:55:46.616318 790270 notify.go:221] Checking for updates...
I1209 01:55:46.617693 790270 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1209 01:55:46.619025 790270 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
I1209 01:55:46.620313 790270 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
I1209 01:55:46.621477 790270 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1209 01:55:46.622714 790270 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1209 01:55:46.624056 790270 driver.go:422] Setting default libvirt URI to qemu:///system
I1209 01:55:46.654512 790270 out.go:179] * Using the kvm2 driver based on user configuration
I1209 01:55:46.655808 790270 start.go:309] selected driver: kvm2
I1209 01:55:46.655826 790270 start.go:927] validating driver "kvm2" against <nil>
I1209 01:55:46.655844 790270 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1209 01:55:46.656615 790270 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1209 01:55:46.656852 790270 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1209 01:55:46.656881 790270 cni.go:84] Creating CNI manager for ""
I1209 01:55:46.656923 790270 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I1209 01:55:46.656933 790270 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1209 01:55:46.656967 790270 start.go:353] cluster config:
{Name:addons-520986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-520986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 01:55:46.657069 790270 iso.go:125] acquiring lock: {Name:mk29a40ab0d6eac4567e308b5229766210ecee59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1209 01:55:46.658537 790270 out.go:179] * Starting "addons-520986" primary control-plane node in "addons-520986" cluster
I1209 01:55:46.659719 790270 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1209 01:55:46.659758 790270 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-785489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
I1209 01:55:46.659771 790270 cache.go:65] Caching tarball of preloaded images
I1209 01:55:46.659886 790270 preload.go:238] Found /home/jenkins/minikube-integration/22081-785489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I1209 01:55:46.659902 790270 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
I1209 01:55:46.660286 790270 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/config.json ...
I1209 01:55:46.660316 790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/config.json: {Name:mk463a364962037a7aec4eadbec0594317e59ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:55:46.660485 790270 start.go:360] acquireMachinesLock for addons-520986: {Name:mk20d7a910149185835b082cbce91d316616a54e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1209 01:55:46.664826 790270 start.go:364] duration metric: took 4.320734ms to acquireMachinesLock for "addons-520986"
I1209 01:55:46.664861 790270 start.go:93] Provisioning new machine with config: &{Name:addons-520986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-520986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1209 01:55:46.664932 790270 start.go:125] createHost starting for "" (driver="kvm2")
I1209 01:55:46.666460 790270 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1209 01:55:46.666658 790270 start.go:159] libmachine.API.Create for "addons-520986" (driver="kvm2")
I1209 01:55:46.666687 790270 client.go:173] LocalClient.Create starting
I1209 01:55:46.666782 790270 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22081-785489/.minikube/certs/ca.pem
I1209 01:55:46.869161 790270 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22081-785489/.minikube/certs/cert.pem
I1209 01:55:46.913708 790270 main.go:143] libmachine: creating domain...
I1209 01:55:46.913734 790270 main.go:143] libmachine: creating network...
I1209 01:55:46.915344 790270 main.go:143] libmachine: found existing default network
I1209 01:55:46.915598 790270 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1209 01:55:46.916312 790270 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fea830}
I1209 01:55:46.916480 790270 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-520986</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1209 01:55:46.922242 790270 main.go:143] libmachine: creating private network mk-addons-520986 192.168.39.0/24...
I1209 01:55:46.994407 790270 main.go:143] libmachine: private network mk-addons-520986 192.168.39.0/24 created
I1209 01:55:46.994727 790270 main.go:143] libmachine: <network>
<name>mk-addons-520986</name>
<uuid>66b68d57-147c-423c-94b4-2860291daa67</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:0f:22:b7'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1209 01:55:46.994763 790270 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986 ...
I1209 01:55:46.994788 790270 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22081-785489/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
I1209 01:55:46.994800 790270 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22081-785489/.minikube
I1209 01:55:46.994902 790270 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22081-785489/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22081-785489/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso...
I1209 01:55:47.259552 790270 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa...
I1209 01:55:47.451640 790270 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/addons-520986.rawdisk...
I1209 01:55:47.451696 790270 main.go:143] libmachine: Writing magic tar header
I1209 01:55:47.451747 790270 main.go:143] libmachine: Writing SSH key tar header
I1209 01:55:47.451867 790270 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986 ...
I1209 01:55:47.451944 790270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986
I1209 01:55:47.451982 790270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986 (perms=drwx------)
I1209 01:55:47.452000 790270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22081-785489/.minikube/machines
I1209 01:55:47.452017 790270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22081-785489/.minikube/machines (perms=drwxr-xr-x)
I1209 01:55:47.452034 790270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22081-785489/.minikube
I1209 01:55:47.452047 790270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22081-785489/.minikube (perms=drwxr-xr-x)
I1209 01:55:47.452065 790270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22081-785489
I1209 01:55:47.452078 790270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22081-785489 (perms=drwxrwxr-x)
I1209 01:55:47.452094 790270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1209 01:55:47.452115 790270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1209 01:55:47.452146 790270 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1209 01:55:47.452159 790270 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1209 01:55:47.452171 790270 main.go:143] libmachine: checking permissions on dir: /home
I1209 01:55:47.452185 790270 main.go:143] libmachine: skipping /home - not owner
I1209 01:55:47.452191 790270 main.go:143] libmachine: defining domain...
I1209 01:55:47.453672 790270 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
  <name>addons-520986</name>
  <memory unit='MiB'>4096</memory>
  <vcpu>2</vcpu>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='host-passthrough'>
  </cpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/boot2docker.iso'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads' />
      <source file='/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/addons-520986.rawdisk'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='mk-addons-520986'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <rng model='virtio'>
      <backend model='random'>/dev/random</backend>
    </rng>
  </devices>
</domain>
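The domain XML above is generated from a handful of parameters (name, memory, vCPUs, disk path, network); a trimmed-down sketch of producing such a definition with text/template (the template fields and paths here are illustrative, and the rendered XML would then be handed to libvirt, e.g. via DomainDefineXML in the libvirt Go bindings):

package main

import (
	"os"
	"text/template"
)

// domainTmpl is a deliberately small, illustrative version of a libvirt
// domain template; minikube's real template carries many more knobs.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainParams struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	p := domainParams{
		Name:      "addons-520986",
		MemoryMiB: 4096,
		CPUs:      2,
		DiskPath:  "/path/to/addons-520986.rawdisk", // placeholder path
		Network:   "mk-addons-520986",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}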
I1209 01:55:47.458941 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:c7:27:4e in network default
I1209 01:55:47.459508 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:55:47.459524 790270 main.go:143] libmachine: starting domain...
I1209 01:55:47.459528 790270 main.go:143] libmachine: ensuring networks are active...
I1209 01:55:47.460454 790270 main.go:143] libmachine: Ensuring network default is active
I1209 01:55:47.460840 790270 main.go:143] libmachine: Ensuring network mk-addons-520986 is active
I1209 01:55:47.461495 790270 main.go:143] libmachine: getting domain XML...
I1209 01:55:47.462574 790270 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
  <name>addons-520986</name>
  <uuid>c1934cbe-8219-4512-9b02-72a0810d6e14</uuid>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'/>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/boot2docker.iso'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' io='threads'/>
      <source file='/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/addons-520986.rawdisk'/>
      <target dev='hda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='scsi' index='0' model='lsilogic'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:09:0b:7a'/>
      <source network='mk-addons-520986'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:c7:27:4e'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <audio id='1' type='none'/>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/random</backend>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </rng>
  </devices>
</domain>
I1209 01:55:48.728187 790270 main.go:143] libmachine: waiting for domain to start...
I1209 01:55:48.729684 790270 main.go:143] libmachine: domain is now running
I1209 01:55:48.729707 790270 main.go:143] libmachine: waiting for IP...
I1209 01:55:48.730521 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:55:48.731245 790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
I1209 01:55:48.731264 790270 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:48.731525 790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
I1209 01:55:48.731612 790270 retry.go:31] will retry after 301.745583ms: waiting for domain to come up
I1209 01:55:49.035367 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:55:49.036121 790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
I1209 01:55:49.036153 790270 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:49.036650 790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
I1209 01:55:49.036706 790270 retry.go:31] will retry after 286.232228ms: waiting for domain to come up
I1209 01:55:49.324213 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:55:49.325170 790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
I1209 01:55:49.325187 790270 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:49.325568 790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
I1209 01:55:49.325638 790270 retry.go:31] will retry after 330.013419ms: waiting for domain to come up
I1209 01:55:49.657466 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:55:49.658530 790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
I1209 01:55:49.658552 790270 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:49.658904 790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
I1209 01:55:49.658943 790270 retry.go:31] will retry after 428.77108ms: waiting for domain to come up
I1209 01:55:50.089689 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:55:50.090440 790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
I1209 01:55:50.090456 790270 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:50.090834 790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
I1209 01:55:50.090878 790270 retry.go:31] will retry after 657.210018ms: waiting for domain to come up
I1209 01:55:50.749853 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:55:50.750838 790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
I1209 01:55:50.750860 790270 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:50.751269 790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
I1209 01:55:50.751316 790270 retry.go:31] will retry after 833.998265ms: waiting for domain to come up
I1209 01:55:51.587393 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:55:51.588051 790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
I1209 01:55:51.588067 790270 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:51.588389 790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
I1209 01:55:51.588423 790270 retry.go:31] will retry after 1.135020025s: waiting for domain to come up
I1209 01:55:52.724811 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:55:52.725426 790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
I1209 01:55:52.725446 790270 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:52.725924 790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
I1209 01:55:52.725975 790270 retry.go:31] will retry after 1.455514481s: waiting for domain to come up
I1209 01:55:54.183732 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:55:54.184417 790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
I1209 01:55:54.184438 790270 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:54.184796 790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
I1209 01:55:54.184837 790270 retry.go:31] will retry after 1.286485281s: waiting for domain to come up
I1209 01:55:55.473478 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:55:55.474294 790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
I1209 01:55:55.474316 790270 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:55.474698 790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
I1209 01:55:55.474747 790270 retry.go:31] will retry after 1.434846567s: waiting for domain to come up
I1209 01:55:56.911490 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:55:56.912405 790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
I1209 01:55:56.912431 790270 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:56.912815 790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
I1209 01:55:56.912873 790270 retry.go:31] will retry after 2.620673714s: waiting for domain to come up
I1209 01:55:59.536454 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:55:59.537336 790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
I1209 01:55:59.537353 790270 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:59.537815 790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
I1209 01:55:59.537855 790270 retry.go:31] will retry after 3.559268644s: waiting for domain to come up
I1209 01:56:03.099218 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:03.099958 790270 main.go:143] libmachine: domain addons-520986 has current primary IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:03.099982 790270 main.go:143] libmachine: found domain IP: 192.168.39.56
I1209 01:56:03.099991 790270 main.go:143] libmachine: reserving static IP address...
I1209 01:56:03.100497 790270 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-520986", mac: "52:54:00:09:0b:7a", ip: "192.168.39.56"} in network mk-addons-520986
I1209 01:56:03.297793 790270 main.go:143] libmachine: reserved static IP address 192.168.39.56 for domain addons-520986
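The retry.go lines above wait for the DHCP lease with a growing, jittered delay between attempts; a generic sketch of that backoff pattern (retryWithBackoff is illustrative, not minikube's actual retry helper):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or maxAttempts is reached,
// sleeping a randomized, growing delay between attempts (illustrative only).
func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < maxAttempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	attempts := 0
	err := retryWithBackoff(8, 300*time.Millisecond, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for domain to come up")
		}
		return nil // pretend the DHCP lease finally appeared
	})
	fmt.Println("done:", err)
}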
I1209 01:56:03.297824 790270 main.go:143] libmachine: waiting for SSH...
I1209 01:56:03.297834 790270 main.go:143] libmachine: Getting to WaitForSSH function...
I1209 01:56:03.301739 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:03.302287 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:minikube Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:03.302317 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:03.302500 790270 main.go:143] libmachine: Using SSH client type: native
I1209 01:56:03.302731 790270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil> [] 0s} 192.168.39.56 22 <nil> <nil>}
I1209 01:56:03.302749 790270 main.go:143] libmachine: About to run SSH command:
exit 0
I1209 01:56:03.414057 790270 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1209 01:56:03.414513 790270 main.go:143] libmachine: domain creation complete
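The `exit 0` probe above is how the driver decides the guest's sshd is reachable; a sketch of the same readiness check with golang.org/x/crypto/ssh (the address, user and key path are placeholders taken from the log, and waitForSSH itself is illustrative):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials the guest and runs "exit 0" until it succeeds (sketch only).
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		Timeout:         5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		client, derr := ssh.Dial("tcp", addr, cfg)
		if derr == nil {
			session, serr := client.NewSession()
			if serr == nil {
				rerr := session.Run("exit 0")
				session.Close()
				client.Close()
				if rerr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh on %s not ready within %v", addr, timeout)
}

func main() {
	if err := waitForSSH("192.168.39.56:22", "docker", "/path/to/id_rsa", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("SSH is up")
}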
I1209 01:56:03.416083 790270 machine.go:94] provisionDockerMachine start ...
I1209 01:56:03.418831 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:03.419268 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:03.419289 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:03.419436 790270 main.go:143] libmachine: Using SSH client type: native
I1209 01:56:03.419692 790270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil> [] 0s} 192.168.39.56 22 <nil> <nil>}
I1209 01:56:03.419708 790270 main.go:143] libmachine: About to run SSH command:
hostname
I1209 01:56:03.530029 790270 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1209 01:56:03.530076 790270 buildroot.go:166] provisioning hostname "addons-520986"
I1209 01:56:03.533410 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:03.533919 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:03.533952 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:03.534141 790270 main.go:143] libmachine: Using SSH client type: native
I1209 01:56:03.534381 790270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil> [] 0s} 192.168.39.56 22 <nil> <nil>}
I1209 01:56:03.534397 790270 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-520986 && echo "addons-520986" | sudo tee /etc/hostname
I1209 01:56:03.664001 790270 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-520986
I1209 01:56:03.667045 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:03.667510 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:03.667537 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:03.667731 790270 main.go:143] libmachine: Using SSH client type: native
I1209 01:56:03.667965 790270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil> [] 0s} 192.168.39.56 22 <nil> <nil>}
I1209 01:56:03.667982 790270 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-520986' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-520986/g' /etc/hosts;
  else
    echo '127.0.1.1 addons-520986' | sudo tee -a /etc/hosts;
  fi
fi
I1209 01:56:03.792735 790270 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1209 01:56:03.792766 790270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22081-785489/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-785489/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-785489/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-785489/.minikube}
I1209 01:56:03.792835 790270 buildroot.go:174] setting up certificates
I1209 01:56:03.792853 790270 provision.go:84] configureAuth start
I1209 01:56:03.796087 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:03.796672 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:03.796703 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:03.799506 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:03.799913 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:03.799940 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:03.800113 790270 provision.go:143] copyHostCerts
I1209 01:56:03.800218 790270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-785489/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-785489/.minikube/ca.pem (1078 bytes)
I1209 01:56:03.800389 790270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-785489/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-785489/.minikube/cert.pem (1123 bytes)
I1209 01:56:03.800475 790270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-785489/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-785489/.minikube/key.pem (1675 bytes)
I1209 01:56:03.800540 790270 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-785489/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-785489/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-785489/.minikube/certs/ca-key.pem org=jenkins.addons-520986 san=[127.0.0.1 192.168.39.56 addons-520986 localhost minikube]
I1209 01:56:03.832656 790270 provision.go:177] copyRemoteCerts
I1209 01:56:03.832718 790270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1209 01:56:03.835172 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:03.835522 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:03.835545 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:03.835701 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:03.922765 790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1209 01:56:03.951020 790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1209 01:56:03.978975 790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
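copyRemoteCerts above pushes ca.pem, server.pem and server-key.pem into /etc/docker over SSH; a compile-only sketch of that kind of transfer using golang.org/x/crypto/ssh (the package name, helper name and tee-based approach are illustrative, not minikube's ssh_runner):

package provision

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// CopyToRemote streams the contents of localPath into remotePath on the guest
// by piping the bytes through "sudo tee" on an SSH session (illustrative; the
// real ssh_runner has its own scp-style transfer with size checks).
func CopyToRemote(client *ssh.Client, localPath, remotePath string) error {
	data, err := os.ReadFile(localPath)
	if err != nil {
		return err
	}
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	cmd := fmt.Sprintf("sudo mkdir -p /etc/docker && sudo tee %s > /dev/null", remotePath)
	return session.Run(cmd)
}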
I1209 01:56:04.007889 790270 provision.go:87] duration metric: took 215.015763ms to configureAuth
I1209 01:56:04.007920 790270 buildroot.go:189] setting minikube options for container-runtime
I1209 01:56:04.008108 790270 config.go:182] Loaded profile config "addons-520986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 01:56:04.008119 790270 machine.go:97] duration metric: took 592.016858ms to provisionDockerMachine
I1209 01:56:04.008126 790270 client.go:176] duration metric: took 17.341434216s to LocalClient.Create
I1209 01:56:04.008166 790270 start.go:167] duration metric: took 17.341508459s to libmachine.API.Create "addons-520986"
I1209 01:56:04.008179 790270 start.go:293] postStartSetup for "addons-520986" (driver="kvm2")
I1209 01:56:04.008189 790270 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1209 01:56:04.008247 790270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1209 01:56:04.010896 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:04.011299 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:04.011331 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:04.011518 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:04.098859 790270 ssh_runner.go:195] Run: cat /etc/os-release
I1209 01:56:04.104319 790270 info.go:137] Remote host: Buildroot 2025.02
I1209 01:56:04.104363 790270 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-785489/.minikube/addons for local assets ...
I1209 01:56:04.104433 790270 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-785489/.minikube/files for local assets ...
I1209 01:56:04.104458 790270 start.go:296] duration metric: took 96.272363ms for postStartSetup
I1209 01:56:04.107612 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:04.108072 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:04.108096 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:04.108333 790270 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/config.json ...
I1209 01:56:04.108558 790270 start.go:128] duration metric: took 17.443612929s to createHost
I1209 01:56:04.110845 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:04.111207 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:04.111241 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:04.111428 790270 main.go:143] libmachine: Using SSH client type: native
I1209 01:56:04.111680 790270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil> [] 0s} 192.168.39.56 22 <nil> <nil>}
I1209 01:56:04.111691 790270 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1209 01:56:04.224888 790270 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765245364.183702467
I1209 01:56:04.224916 790270 fix.go:216] guest clock: 1765245364.183702467
I1209 01:56:04.224929 790270 fix.go:229] Guest: 2025-12-09 01:56:04.183702467 +0000 UTC Remote: 2025-12-09 01:56:04.108573478 +0000 UTC m=+17.546947163 (delta=75.128989ms)
I1209 01:56:04.224946 790270 fix.go:200] guest clock delta is within tolerance: 75.128989ms
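fix.go compares the guest's `date +%s.%N` output with the host clock and accepts the 75ms delta; a small sketch of that comparison (parseGuestClock and the 2s tolerance are illustrative):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1765245364.183702467" (seconds.nanoseconds) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		if len(frac) > 9 {
			frac = frac[:9]
		}
		// Right-pad to 9 digits so "1837" means 183700000 ns, not 1837 ns.
		frac += strings.Repeat("0", 9-len(frac))
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1765245364.183702467")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}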
I1209 01:56:04.224952 790270 start.go:83] releasing machines lock for "addons-520986", held for 17.560105488s
I1209 01:56:04.228231 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:04.228724 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:04.228763 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:04.229380 790270 ssh_runner.go:195] Run: cat /version.json
I1209 01:56:04.229510 790270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1209 01:56:04.232240 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:04.232458 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:04.232734 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:04.232774 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:04.232997 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:04.233022 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:04.233042 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:04.233264 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:04.341562 790270 ssh_runner.go:195] Run: systemctl --version
I1209 01:56:04.347979 790270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1209 01:56:04.354408 790270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1209 01:56:04.354481 790270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1209 01:56:04.374477 790270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1209 01:56:04.374501 790270 start.go:496] detecting cgroup driver to use...
I1209 01:56:04.374581 790270 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1209 01:56:04.406466 790270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1209 01:56:04.422568 790270 docker.go:218] disabling cri-docker service (if available) ...
I1209 01:56:04.422630 790270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1209 01:56:04.440028 790270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1209 01:56:04.456080 790270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1209 01:56:04.600497 790270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1209 01:56:04.812027 790270 docker.go:234] disabling docker service ...
I1209 01:56:04.812096 790270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1209 01:56:04.829739 790270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1209 01:56:04.846377 790270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1209 01:56:05.009047 790270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1209 01:56:05.158428 790270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1209 01:56:05.174807 790270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1209 01:56:05.198468 790270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1209 01:56:05.212840 790270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1209 01:56:05.227227 790270 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1209 01:56:05.227304 790270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1209 01:56:05.240326 790270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1209 01:56:05.252984 790270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1209 01:56:05.266394 790270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1209 01:56:05.279077 790270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1209 01:56:05.293715 790270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1209 01:56:05.307344 790270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1209 01:56:05.319706 790270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1209 01:56:05.334271 790270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1209 01:56:05.348371 790270 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1209 01:56:05.348440 790270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1209 01:56:05.371313 790270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1209 01:56:05.383686 790270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 01:56:05.522835 790270 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1209 01:56:05.564735 790270 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1209 01:56:05.564834 790270 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1209 01:56:05.570831 790270 retry.go:31] will retry after 971.714769ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
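start.go then budgets 60s for /run/containerd/containerd.sock to appear after the restart; a sketch of polling for a socket path (waitForSocket is illustrative, and the real check runs stat over SSH inside the guest):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout expires (sketch only).
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%s did not appear within %v", path, timeout)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("containerd socket is ready")
}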
I1209 01:56:06.543044 790270 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1209 01:56:06.549461 790270 start.go:564] Will wait 60s for crictl version
I1209 01:56:06.549552 790270 ssh_runner.go:195] Run: which crictl
I1209 01:56:06.554041 790270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1209 01:56:06.587556 790270 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.1.4
RuntimeApiVersion: v1
I1209 01:56:06.587649 790270 ssh_runner.go:195] Run: containerd --version
I1209 01:56:06.609435 790270 ssh_runner.go:195] Run: containerd --version
I1209 01:56:06.632744 790270 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.1.4 ...
I1209 01:56:06.637392 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:06.637810 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:06.637834 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:06.638074 790270 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1209 01:56:06.643262 790270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
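The one-liner above strips any stale host.minikube.internal line and appends the gateway IP; an equivalent idempotent update sketched in Go (ensureHostsEntry is illustrative and writes to a scratch file rather than /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so exactly one line maps name to ip.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" {
			continue
		}
		f := strings.Fields(line)
		if len(f) >= 2 && f[len(f)-1] == name {
			continue // drop any previous mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Work on a scratch copy; the real code edits /etc/hosts inside the guest.
	_ = os.WriteFile("hosts.test", []byte("127.0.0.1\tlocalhost\n"), 0o644)
	if err := ensureHostsEntry("hosts.test", "192.168.39.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
	out, _ := os.ReadFile("hosts.test")
	fmt.Print(string(out))
}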
I1209 01:56:06.659403 790270 kubeadm.go:884] updating cluster {Name:addons-520986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-520986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1209 01:56:06.659576 790270 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1209 01:56:06.659655 790270 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 01:56:06.690826 790270 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
I1209 01:56:06.690913 790270 ssh_runner.go:195] Run: which lz4
I1209 01:56:06.695296 790270 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1209 01:56:06.700114 790270 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1209 01:56:06.700161 790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (339763354 bytes)
I1209 01:56:07.985220 790270 containerd.go:563] duration metric: took 1.289978143s to copy over tarball
I1209 01:56:07.985302 790270 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1209 01:56:09.445921 790270 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.460587753s)
I1209 01:56:09.445956 790270 containerd.go:570] duration metric: took 1.460704454s to extract the tarball
I1209 01:56:09.445966 790270 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1209 01:56:09.487466 790270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 01:56:09.648199 790270 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1209 01:56:09.701644 790270 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 01:56:09.730620 790270 retry.go:31] will retry after 126.27895ms: sudo crictl images --output json: Process exited with status 1
stdout:
stderr:
time="2025-12-09T01:56:09Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
I1209 01:56:09.858060 790270 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 01:56:09.886334 790270 retry.go:31] will retry after 424.832912ms: sudo crictl images --output json: Process exited with status 1
stdout:
stderr:
time="2025-12-09T01:56:09Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
I1209 01:56:10.312118 790270 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 01:56:10.339693 790270 retry.go:31] will retry after 484.563011ms: sudo crictl images --output json: Process exited with status 1
stdout:
stderr:
time="2025-12-09T01:56:10Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
I1209 01:56:10.824419 790270 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 01:56:10.850102 790270 retry.go:31] will retry after 589.37792ms: sudo crictl images --output json: Process exited with status 1
stdout:
stderr:
time="2025-12-09T01:56:10Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
I1209 01:56:11.439968 790270 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 01:56:11.468075 790270 retry.go:31] will retry after 813.68456ms: sudo crictl images --output json: Process exited with status 1
stdout:
stderr:
time="2025-12-09T01:56:11Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
I1209 01:56:12.282326 790270 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 01:56:12.317343 790270 containerd.go:627] all images are preloaded for containerd runtime.
I1209 01:56:12.317376 790270 cache_images.go:86] Images are preloaded, skipping loading
I1209 01:56:12.317393 790270 kubeadm.go:935] updating node { 192.168.39.56 8443 v1.34.2 containerd true true} ...
I1209 01:56:12.317526 790270 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-520986 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.56
[Install]
config:
{KubernetesVersion:v1.34.2 ClusterName:addons-520986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1209 01:56:12.317595 790270 ssh_runner.go:195] Run: sudo crictl info
I1209 01:56:12.349483 790270 cni.go:84] Creating CNI manager for ""
I1209 01:56:12.349509 790270 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I1209 01:56:12.349528 790270 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1209 01:56:12.349554 790270 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.56 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-520986 NodeName:addons-520986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1209 01:56:12.349687 790270 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.56
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "addons-520986"
  kubeletExtraArgs:
  - name: "node-ip"
    value: "192.168.39.56"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.56"]
  extraArgs:
  - name: "enable-admission-plugins"
    value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
  - name: "allocate-node-cidrs"
    value: "true"
  - name: "leader-elect"
    value: "false"
scheduler:
  extraArgs:
  - name: "leader-elect"
    value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
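The generated kubeadm config is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); a minimal sketch of walking such a stream with gopkg.in/yaml.v3 (doc below is a stand-in for the file written to /var/tmp/minikube/kubeadm.yaml.new):

package main

import (
	"errors"
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

const doc = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`

func main() {
	dec := yaml.NewDecoder(strings.NewReader(doc))
	for {
		var m map[string]interface{}
		if err := dec.Decode(&m); err != nil {
			if errors.Is(err, io.EOF) {
				return // all documents consumed
			}
			panic(err)
		}
		fmt.Printf("%s / %s\n", m["apiVersion"], m["kind"])
	}
}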
I1209 01:56:12.349783 790270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
I1209 01:56:12.362577 790270 binaries.go:51] Found k8s binaries, skipping transfer
I1209 01:56:12.362652 790270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1209 01:56:12.374618 790270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
I1209 01:56:12.396009 790270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1209 01:56:12.416092 790270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
I1209 01:56:12.436476 790270 ssh_runner.go:195] Run: grep 192.168.39.56 control-plane.minikube.internal$ /etc/hosts
I1209 01:56:12.441066 790270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.56 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1209 01:56:12.455877 790270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 01:56:12.591465 790270 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1209 01:56:12.611175 790270 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986 for IP: 192.168.39.56
I1209 01:56:12.611201 790270 certs.go:195] generating shared ca certs ...
I1209 01:56:12.611225 790270 certs.go:227] acquiring lock for ca certs: {Name:mk11c7b39a751cc374cf1934fc2b19c48b37e451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:12.612106 790270 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-785489/.minikube/ca.key
I1209 01:56:12.717616 790270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-785489/.minikube/ca.crt ...
I1209 01:56:12.717656 790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/ca.crt: {Name:mk3c1e8d6ffe211e2671c48707faf8e00f4bdfdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:12.718573 790270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-785489/.minikube/ca.key ...
I1209 01:56:12.718611 790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/ca.key: {Name:mk3bfe1a0273ff33aafb468c655fc6f6c7cb7e40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:12.719266 790270 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-785489/.minikube/proxy-client-ca.key
I1209 01:56:12.773171 790270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-785489/.minikube/proxy-client-ca.crt ...
I1209 01:56:12.773206 790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/proxy-client-ca.crt: {Name:mkd07f91f87087eb7f45edc8239dd6bb28ef0ebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:12.774208 790270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-785489/.minikube/proxy-client-ca.key ...
I1209 01:56:12.774240 790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/proxy-client-ca.key: {Name:mk45152197778e8fc7475822cf1b22c0f6930e5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:12.780328 790270 certs.go:257] generating profile certs ...
I1209 01:56:12.780433 790270 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.key
I1209 01:56:12.780456 790270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt with IP's: []
I1209 01:56:12.885339 790270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt ...
I1209 01:56:12.885375 790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: {Name:mkda60a8b805fd51a3e5a7f872d0d54e37aec82e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:12.886444 790270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.key ...
I1209 01:56:12.886469 790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.key: {Name:mk766b64f28c7278b4ccbf6b51f6b04776f69cb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:12.886596 790270 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.key.a27b7307
I1209 01:56:12.886621 790270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.crt.a27b7307 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.56]
I1209 01:56:13.101574 790270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.crt.a27b7307 ...
I1209 01:56:13.101611 790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.crt.a27b7307: {Name:mk3b9116a9247d2a26910be7cd2e57868f7f8ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:13.101791 790270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.key.a27b7307 ...
I1209 01:56:13.101804 790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.key.a27b7307: {Name:mkf3e2fd911864f34dd44598c3b837bc37d8a606 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:13.101879 790270 certs.go:382] copying /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.crt.a27b7307 -> /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.crt
I1209 01:56:13.101974 790270 certs.go:386] copying /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.key.a27b7307 -> /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.key
I1209 01:56:13.102021 790270 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/proxy-client.key
I1209 01:56:13.102040 790270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/proxy-client.crt with IP's: []
I1209 01:56:13.162823 790270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/proxy-client.crt ...
I1209 01:56:13.162853 790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/proxy-client.crt: {Name:mk527dd9917c037e6b7b6e09620ff9010fdb7478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:13.163838 790270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/proxy-client.key ...
I1209 01:56:13.163870 790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/proxy-client.key: {Name:mk012b14cbed5aea3498605aac03a6c7a0c5f8b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:13.164064 790270 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-785489/.minikube/certs/ca-key.pem (1679 bytes)
I1209 01:56:13.164114 790270 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-785489/.minikube/certs/ca.pem (1078 bytes)
I1209 01:56:13.164156 790270 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-785489/.minikube/certs/cert.pem (1123 bytes)
I1209 01:56:13.164181 790270 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-785489/.minikube/certs/key.pem (1675 bytes)
I1209 01:56:13.164765 790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1209 01:56:13.196720 790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1209 01:56:13.225933 790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1209 01:56:13.255269 790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1209 01:56:13.287240 790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1209 01:56:13.318402 790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1209 01:56:13.348599 790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1209 01:56:13.377320 790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1209 01:56:13.405727 790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1209 01:56:13.434160 790270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1209 01:56:13.454184 790270 ssh_runner.go:195] Run: openssl version
I1209 01:56:13.460454 790270 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1209 01:56:13.471760 790270 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1209 01:56:13.483044 790270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1209 01:56:13.488474 790270 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 9 01:56 /usr/share/ca-certificates/minikubeCA.pem
I1209 01:56:13.488547 790270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1209 01:56:13.495758 790270 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1209 01:56:13.507195 790270 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1209 01:56:13.518417 790270 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1209 01:56:13.522993 790270 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1209 01:56:13.523055  790270 kubeadm.go:401] StartCluster: {Name:addons-520986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-520986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 01:56:13.523167 790270 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1209 01:56:13.523224 790270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1209 01:56:13.556083 790270 cri.go:89] found id: ""
I1209 01:56:13.556192 790270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1209 01:56:13.568252 790270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1209 01:56:13.579746 790270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1209 01:56:13.591195 790270 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1209 01:56:13.591212 790270 kubeadm.go:158] found existing configuration files:
I1209 01:56:13.591253 790270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1209 01:56:13.601777 790270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1209 01:56:13.601832 790270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1209 01:56:13.613104 790270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1209 01:56:13.624152 790270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1209 01:56:13.624225 790270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1209 01:56:13.636039 790270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1209 01:56:13.646291 790270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1209 01:56:13.646351 790270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1209 01:56:13.657345 790270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1209 01:56:13.667792 790270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1209 01:56:13.667840 790270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1209 01:56:13.679049 790270 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1209 01:56:13.728881 790270 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
I1209 01:56:13.728964 790270 kubeadm.go:319] [preflight] Running pre-flight checks
I1209 01:56:13.828471 790270 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1209 01:56:13.828586 790270 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1209 01:56:13.828682 790270 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1209 01:56:13.837302 790270 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1209 01:56:13.840407 790270 out.go:252] - Generating certificates and keys ...
I1209 01:56:13.840487 790270 kubeadm.go:319] [certs] Using existing ca certificate authority
I1209 01:56:13.840555 790270 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1209 01:56:14.371848 790270 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1209 01:56:14.678008 790270 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1209 01:56:14.944951 790270 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1209 01:56:15.341403 790270 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1209 01:56:15.534966 790270 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1209 01:56:15.535085 790270 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-520986 localhost] and IPs [192.168.39.56 127.0.0.1 ::1]
I1209 01:56:15.589613 790270 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1209 01:56:15.589781 790270 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-520986 localhost] and IPs [192.168.39.56 127.0.0.1 ::1]
I1209 01:56:15.786020 790270 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1209 01:56:15.924653 790270 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1209 01:56:16.047225 790270 kubeadm.go:319] [certs] Generating "sa" key and public key
I1209 01:56:16.047416 790270 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1209 01:56:16.456283 790270 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1209 01:56:16.646808 790270 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1209 01:56:16.774703 790270 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1209 01:56:17.611265 790270 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1209 01:56:17.816990 790270 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1209 01:56:17.817398 790270 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1209 01:56:17.819583 790270 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1209 01:56:17.821479 790270 out.go:252] - Booting up control plane ...
I1209 01:56:17.821597 790270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1209 01:56:17.821741 790270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1209 01:56:17.821877 790270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1209 01:56:17.844021 790270 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1209 01:56:17.844165 790270 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1209 01:56:17.851307 790270 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1209 01:56:17.853267 790270 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1209 01:56:17.853326 790270 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1209 01:56:18.011071 790270 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1209 01:56:18.011402 790270 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1209 01:56:19.012244 790270 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00184424s
I1209 01:56:19.015108 790270 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1209 01:56:19.015220 790270 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.56:8443/livez
I1209 01:56:19.015300 790270 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1209 01:56:19.015364 790270 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1209 01:56:21.319027 790270 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.305413868s
I1209 01:56:22.202478 790270 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.189938581s
I1209 01:56:24.012813 790270 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001548092s
I1209 01:56:24.039146 790270 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1209 01:56:24.053804 790270 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1209 01:56:24.068616 790270 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1209 01:56:24.068844 790270 kubeadm.go:319] [mark-control-plane] Marking the node addons-520986 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1209 01:56:24.080147 790270 kubeadm.go:319] [bootstrap-token] Using token: iz5njm.08avfkkb65ug1lvs
I1209 01:56:24.081418 790270 out.go:252] - Configuring RBAC rules ...
I1209 01:56:24.081531 790270 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1209 01:56:24.091450 790270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1209 01:56:24.100831 790270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1209 01:56:24.104094 790270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1209 01:56:24.107269 790270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1209 01:56:24.110706 790270 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1209 01:56:24.419812 790270 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1209 01:56:24.866699 790270 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1209 01:56:25.418911 790270 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1209 01:56:25.419917 790270 kubeadm.go:319]
I1209 01:56:25.420015 790270 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1209 01:56:25.420061 790270 kubeadm.go:319]
I1209 01:56:25.420182 790270 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1209 01:56:25.420192 790270 kubeadm.go:319]
I1209 01:56:25.420226 790270 kubeadm.go:319] mkdir -p $HOME/.kube
I1209 01:56:25.420315 790270 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1209 01:56:25.420413 790270 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1209 01:56:25.420438 790270 kubeadm.go:319]
I1209 01:56:25.420527 790270 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1209 01:56:25.420537 790270 kubeadm.go:319]
I1209 01:56:25.420601 790270 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1209 01:56:25.420609 790270 kubeadm.go:319]
I1209 01:56:25.420676 790270 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1209 01:56:25.420787 790270 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1209 01:56:25.420890 790270 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1209 01:56:25.420899 790270 kubeadm.go:319]
I1209 01:56:25.421043 790270 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1209 01:56:25.421150 790270 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1209 01:56:25.421158 790270 kubeadm.go:319]
I1209 01:56:25.421229 790270 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token iz5njm.08avfkkb65ug1lvs \
I1209 01:56:25.421319 790270 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:c11fe7a294546fd865e9cf4259a0b816aae73d916f65cf1122876c70c9af5892 \
I1209 01:56:25.421339 790270 kubeadm.go:319] --control-plane
I1209 01:56:25.421343 790270 kubeadm.go:319]
I1209 01:56:25.421431 790270 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1209 01:56:25.421439 790270 kubeadm.go:319]
I1209 01:56:25.421528 790270 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token iz5njm.08avfkkb65ug1lvs \
I1209 01:56:25.421643 790270 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:c11fe7a294546fd865e9cf4259a0b816aae73d916f65cf1122876c70c9af5892
I1209 01:56:25.423681 790270 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1209 01:56:25.423722 790270 cni.go:84] Creating CNI manager for ""
I1209 01:56:25.423738 790270 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I1209 01:56:25.425355 790270 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1209 01:56:25.426565 790270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1209 01:56:25.440377 790270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1209 01:56:25.467210 790270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1209 01:56:25.467293 790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:25.467294 790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-520986 minikube.k8s.io/updated_at=2025_12_09T01_56_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=addons-520986 minikube.k8s.io/primary=true
I1209 01:56:25.486538 790270 ops.go:34] apiserver oom_adj: -16
I1209 01:56:25.607098 790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:26.107754 790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:26.607283 790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:27.107393 790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:27.608126 790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:28.107785 790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:28.607608 790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:29.108191 790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:29.607759 790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:30.107593 790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:30.607410 790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:30.744750 790270 kubeadm.go:1114] duration metric: took 5.277526181s to wait for elevateKubeSystemPrivileges
I1209 01:56:30.744822 790270 kubeadm.go:403] duration metric: took 17.221768805s to StartCluster
I1209 01:56:30.744852 790270 settings.go:142] acquiring lock: {Name:mke007a994b1310d493b4df603715fb4b029e8ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:30.745603 790270 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22081-785489/kubeconfig
I1209 01:56:30.746292 790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/kubeconfig: {Name:mk11cf9ad80d3da3c3f1920bc8be0a3badb85306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:30.747048 790270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1209 01:56:30.747114 790270 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1209 01:56:30.747199 790270 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1209 01:56:30.747350 790270 config.go:182] Loaded profile config "addons-520986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 01:56:30.747367 790270 addons.go:70] Setting cloud-spanner=true in profile "addons-520986"
I1209 01:56:30.747371 790270 addons.go:70] Setting gcp-auth=true in profile "addons-520986"
I1209 01:56:30.747356 790270 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-520986"
I1209 01:56:30.747394 790270 mustload.go:66] Loading cluster: addons-520986
I1209 01:56:30.747356 790270 addons.go:70] Setting yakd=true in profile "addons-520986"
I1209 01:56:30.747407 790270 addons.go:239] Setting addon cloud-spanner=true in "addons-520986"
I1209 01:56:30.747413 790270 addons.go:70] Setting ingress=true in profile "addons-520986"
I1209 01:56:30.747425 790270 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-520986"
I1209 01:56:30.747437 790270 addons.go:239] Setting addon ingress=true in "addons-520986"
I1209 01:56:30.747445 790270 host.go:66] Checking if "addons-520986" exists ...
I1209 01:56:30.747436 790270 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-520986"
I1209 01:56:30.747456 790270 host.go:66] Checking if "addons-520986" exists ...
I1209 01:56:30.747475 790270 host.go:66] Checking if "addons-520986" exists ...
I1209 01:56:30.747476 790270 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-520986"
I1209 01:56:30.747479 790270 addons.go:70] Setting volcano=true in profile "addons-520986"
I1209 01:56:30.747503 790270 addons.go:239] Setting addon volcano=true in "addons-520986"
I1209 01:56:30.747530 790270 host.go:66] Checking if "addons-520986" exists ...
I1209 01:56:30.747602 790270 config.go:182] Loaded profile config "addons-520986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 01:56:30.747350 790270 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-520986"
I1209 01:56:30.748259 790270 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-520986"
I1209 01:56:30.748297 790270 host.go:66] Checking if "addons-520986" exists ...
I1209 01:56:30.748398 790270 addons.go:70] Setting volumesnapshots=true in profile "addons-520986"
I1209 01:56:30.748413 790270 addons.go:239] Setting addon volumesnapshots=true in "addons-520986"
I1209 01:56:30.748434 790270 host.go:66] Checking if "addons-520986" exists ...
I1209 01:56:30.748441 790270 addons.go:70] Setting inspektor-gadget=true in profile "addons-520986"
I1209 01:56:30.748460 790270 addons.go:239] Setting addon inspektor-gadget=true in "addons-520986"
I1209 01:56:30.747422 790270 addons.go:239] Setting addon yakd=true in "addons-520986"
I1209 01:56:30.748513 790270 addons.go:70] Setting ingress-dns=true in profile "addons-520986"
I1209 01:56:30.748535 790270 host.go:66] Checking if "addons-520986" exists ...
I1209 01:56:30.748547 790270 addons.go:70] Setting storage-provisioner=true in profile "addons-520986"
I1209 01:56:30.748549 790270 addons.go:239] Setting addon ingress-dns=true in "addons-520986"
I1209 01:56:30.748560 790270 addons.go:239] Setting addon storage-provisioner=true in "addons-520986"
I1209 01:56:30.748575 790270 host.go:66] Checking if "addons-520986" exists ...
I1209 01:56:30.748584 790270 host.go:66] Checking if "addons-520986" exists ...
I1209 01:56:30.749028 790270 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-520986"
I1209 01:56:30.749050 790270 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-520986"
I1209 01:56:30.749078 790270 host.go:66] Checking if "addons-520986" exists ...
I1209 01:56:30.747363 790270 addons.go:70] Setting default-storageclass=true in profile "addons-520986"
I1209 01:56:30.749267 790270 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-520986"
I1209 01:56:30.747394 790270 addons.go:70] Setting registry=true in profile "addons-520986"
I1209 01:56:30.749311 790270 addons.go:239] Setting addon registry=true in "addons-520986"
I1209 01:56:30.749339 790270 host.go:66] Checking if "addons-520986" exists ...
I1209 01:56:30.748537 790270 addons.go:70] Setting registry-creds=true in profile "addons-520986"
I1209 01:56:30.749375 790270 addons.go:239] Setting addon registry-creds=true in "addons-520986"
I1209 01:56:30.749410 790270 host.go:66] Checking if "addons-520986" exists ...
I1209 01:56:30.748493 790270 host.go:66] Checking if "addons-520986" exists ...
I1209 01:56:30.749556 790270 addons.go:70] Setting metrics-server=true in profile "addons-520986"
I1209 01:56:30.749572 790270 addons.go:239] Setting addon metrics-server=true in "addons-520986"
I1209 01:56:30.749595 790270 host.go:66] Checking if "addons-520986" exists ...
I1209 01:56:30.750121 790270 out.go:179] * Verifying Kubernetes components...
I1209 01:56:30.751748 790270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 01:56:30.754313 790270 host.go:66] Checking if "addons-520986" exists ...
I1209 01:56:30.755383 790270 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-520986"
I1209 01:56:30.755413 790270 host.go:66] Checking if "addons-520986" exists ...
I1209 01:56:30.756443 790270 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
I1209 01:56:30.756476 790270 out.go:179] - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
I1209 01:56:30.757365 790270 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1209 01:56:30.757368 790270 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
I1209 01:56:30.758240 790270 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1209 01:56:30.758276 790270 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1209 01:56:30.758327 790270 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1209 01:56:30.758865 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1209 01:56:30.758956 790270 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1209 01:56:30.759021 790270 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1209 01:56:30.759076 790270 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1209 01:56:30.759415 790270 addons.go:239] Setting addon default-storageclass=true in "addons-520986"
I1209 01:56:30.759950 790270 host.go:66] Checking if "addons-520986" exists ...
I1209 01:56:30.759971 790270 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1209 01:56:30.760003 790270 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1209 01:56:30.760311 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1209 01:56:30.760002 790270 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1209 01:56:30.760355 790270 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1209 01:56:30.760034 790270 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1209 01:56:30.760620 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1209 01:56:30.760036 790270 out.go:179] - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
I1209 01:56:30.760712 790270 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1209 01:56:30.760723 790270 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1209 01:56:30.760732 790270 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1209 01:56:30.760748 790270 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1209 01:56:30.760756 790270 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1209 01:56:30.760769 790270 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
I1209 01:56:30.760802 790270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1209 01:56:30.761937 790270 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1209 01:56:30.760837 790270 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1209 01:56:30.762043 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1209 01:56:30.761375 790270 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1209 01:56:30.761461 790270 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1209 01:56:30.762262 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1209 01:56:30.762980 790270 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1209 01:56:30.763261 790270 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1209 01:56:30.763633 790270 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1209 01:56:30.764059 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1209 01:56:30.763649 790270 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1209 01:56:30.764121 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1209 01:56:30.763835 790270 out.go:179] - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
I1209 01:56:30.764441 790270 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1209 01:56:30.764447 790270 out.go:179] - Using image docker.io/registry:3.0.0
I1209 01:56:30.764908 790270 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1209 01:56:30.765467 790270 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1209 01:56:30.765703 790270 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1209 01:56:30.765795 790270 out.go:179] - Using image docker.io/busybox:stable
I1209 01:56:30.766434 790270 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1209 01:56:30.766453 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1209 01:56:30.766839 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.767072 790270 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1209 01:56:30.767148 790270 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1209 01:56:30.767161 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1209 01:56:30.767292 790270 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1209 01:56:30.767316 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1209 01:56:30.769055 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:30.769113 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.769553 790270 addons.go:436] installing /etc/kubernetes/addons/volcano-deployment.yaml
I1209 01:56:30.769577 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
I1209 01:56:30.770894 790270 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1209 01:56:30.771959 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:30.772718 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.773470 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.774073 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.774178 790270 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1209 01:56:30.774664 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:30.774706 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.775210 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:30.775244 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.775628 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:30.775706 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:30.775745 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.775941 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:30.776005 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.776496 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.776791 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:30.776936 790270 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1209 01:56:30.777195 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.777393 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:30.777427 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.778030 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:30.778053 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.778065 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.778340 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:30.778720 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:30.779017 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:30.779052 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.779179 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.779300 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.779404 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:30.779444 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:30.779500 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.779788 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.779971 790270 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1209 01:56:30.780006 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:30.780464 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:30.780507 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.780619 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:30.780654 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.780702 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:30.780729 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.780899 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:30.781022 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:30.781173 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.781290 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:30.781497 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.781361 790270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1209 01:56:30.781573 790270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1209 01:56:30.781600 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.781762 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:30.781795 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.781826 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.782232 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:30.782231 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:30.782282 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.782492 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:30.782528 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.782549 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:30.782799 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:30.782833 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.782946 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:30.783420 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:30.785121 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.785671 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:30.785701 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:30.785928 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
W1209 01:56:30.885803 790270 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55856->192.168.39.56:22: read: connection reset by peer
I1209 01:56:30.885852 790270 retry.go:31] will retry after 315.630491ms: ssh: handshake failed: read tcp 192.168.39.1:55856->192.168.39.56:22: read: connection reset by peer
W1209 01:56:30.885957 790270 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55868->192.168.39.56:22: read: connection reset by peer
I1209 01:56:30.885973 790270 retry.go:31] will retry after 159.894679ms: ssh: handshake failed: read tcp 192.168.39.1:55868->192.168.39.56:22: read: connection reset by peer
W1209 01:56:31.047406 790270 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55912->192.168.39.56:22: read: connection reset by peer
I1209 01:56:31.047447 790270 retry.go:31] will retry after 517.041324ms: ssh: handshake failed: read tcp 192.168.39.1:55912->192.168.39.56:22: read: connection reset by peer
I1209 01:56:31.687907 790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1209 01:56:31.928222 790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1209 01:56:31.943684 790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1209 01:56:31.979960 790270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1209 01:56:31.979989 790270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1209 01:56:32.007262 790270 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1209 01:56:32.007296 790270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1209 01:56:32.030729 790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1209 01:56:32.077837 790270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1209 01:56:32.077866 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1209 01:56:32.098682 790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1209 01:56:32.115277 790270 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1209 01:56:32.115311 790270 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1209 01:56:32.174232 790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1209 01:56:32.191960 790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I1209 01:56:32.212183 790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1209 01:56:32.270257 790270 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.523162924s)
I1209 01:56:32.270281 790270 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.518500815s)
I1209 01:56:32.270384 790270 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1209 01:56:32.270485 790270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1209 01:56:32.343461 790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1209 01:56:32.377118 790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1209 01:56:32.421181 790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1209 01:56:32.534208 790270 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1209 01:56:32.534239 790270 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1209 01:56:32.544648 790270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1209 01:56:32.544672 790270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1209 01:56:32.586284 790270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1209 01:56:32.586311 790270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1209 01:56:32.652799 790270 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1209 01:56:32.652833 790270 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1209 01:56:32.689440 790270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1209 01:56:32.689474 790270 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1209 01:56:32.866152 790270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1209 01:56:32.866193 790270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1209 01:56:32.950224 790270 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1209 01:56:32.950265 790270 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1209 01:56:32.970942 790270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1209 01:56:32.970971 790270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1209 01:56:33.012217 790270 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1209 01:56:33.012249 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1209 01:56:33.035091 790270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1209 01:56:33.035125 790270 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1209 01:56:33.264956 790270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1209 01:56:33.264986 790270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1209 01:56:33.351866 790270 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1209 01:56:33.351900 790270 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1209 01:56:33.377442 790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1209 01:56:33.442186 790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1209 01:56:33.493160 790270 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1209 01:56:33.493198 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1209 01:56:33.610018 790270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1209 01:56:33.610058 790270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1209 01:56:33.719009 790270 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1209 01:56:33.719037 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1209 01:56:33.818376 790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1209 01:56:33.915771 790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.227805617s)
I1209 01:56:33.927450 790270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1209 01:56:33.927486 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1209 01:56:34.260305 790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1209 01:56:34.275552 790270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1209 01:56:34.275586 790270 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1209 01:56:34.885261 790270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1209 01:56:34.885287 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1209 01:56:34.994611 790270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1209 01:56:34.994642 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1209 01:56:35.364391 790270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1209 01:56:35.364431 790270 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1209 01:56:35.689860 790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1209 01:56:36.576722 790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.648453464s)
I1209 01:56:38.211302 790270 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1209 01:56:38.214879 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:38.215429 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:38.215477 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:38.215671 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:38.435516 790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (6.491782716s)
I1209 01:56:38.483449 790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.452671472s)
I1209 01:56:38.483510 790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (6.384783918s)
I1209 01:56:38.483568 790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.309302132s)
I1209 01:56:38.908824 790270 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1209 01:56:39.160529 790270 addons.go:239] Setting addon gcp-auth=true in "addons-520986"
I1209 01:56:39.160608 790270 host.go:66] Checking if "addons-520986" exists ...
I1209 01:56:39.162929 790270 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1209 01:56:39.165776 790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:39.166436 790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
I1209 01:56:39.166475 790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
I1209 01:56:39.166692 790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
I1209 01:56:44.787014 790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (12.574786374s)
I1209 01:56:44.787091 790270 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (12.51654395s)
I1209 01:56:44.787108 790270 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1209 01:56:44.787116 790270 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (12.516695671s)
I1209 01:56:44.787269 790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (12.443769934s)
I1209 01:56:44.787304 790270 addons.go:495] Verifying addon ingress=true in "addons-520986"
I1209 01:56:44.787335 790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (12.410171254s)
I1209 01:56:44.787351 790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.595355719s)
I1209 01:56:44.787414 790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.409947704s)
I1209 01:56:44.787442 790270 addons.go:495] Verifying addon registry=true in "addons-520986"
I1209 01:56:44.787542 790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.345316669s)
I1209 01:56:44.787568 790270 addons.go:495] Verifying addon metrics-server=true in "addons-520986"
I1209 01:56:44.787603 790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.969190073s)
I1209 01:56:44.788204 790270 node_ready.go:35] waiting up to 6m0s for node "addons-520986" to be "Ready" ...
I1209 01:56:44.787382 790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (12.366179018s)
I1209 01:56:44.787747 790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.527408329s)
W1209 01:56:44.788387 790270 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1209 01:56:44.788412 790270 retry.go:31] will retry after 241.528383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
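(The failure above is an ordering race rather than a broken manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation that creates its CRD, before the API server has registered the new kind, hence "ensure CRDs are installed first". The retry at 01:56:45 re-applies the same files with --force and completes. A minimal sketch of avoiding the race by waiting for the CRD to become established before applying the class; the file paths are the ones from the commands above:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
)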
I1209 01:56:44.787982 790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.09807641s)
I1209 01:56:44.788439 790270 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-520986"
I1209 01:56:44.788011 790270 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.625061466s)
I1209 01:56:44.788739 790270 out.go:179] * Verifying ingress addon...
I1209 01:56:44.789727 790270 out.go:179] * Verifying registry addon...
I1209 01:56:44.789728 790270 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-520986 service yakd-dashboard -n yakd-dashboard
I1209 01:56:44.790505 790270 out.go:179] * Verifying csi-hostpath-driver addon...
I1209 01:56:44.790515 790270 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1209 01:56:44.791200 790270 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1209 01:56:44.792156 790270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1209 01:56:44.792188 790270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1209 01:56:44.793613 790270 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1209 01:56:44.794827 790270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1209 01:56:44.794861 790270 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1209 01:56:44.896959 790270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1209 01:56:44.896990 790270 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1209 01:56:44.906527 790270 node_ready.go:49] node "addons-520986" is "Ready"
I1209 01:56:44.906557 790270 node_ready.go:38] duration metric: took 118.298338ms for node "addons-520986" to be "Ready" ...
I1209 01:56:44.906573 790270 api_server.go:52] waiting for apiserver process to appear ...
I1209 01:56:44.906653 790270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1209 01:56:44.964542 790270 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1209 01:56:44.964569 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:44.964589 790270 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1209 01:56:44.964606 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:44.964596 790270 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1209 01:56:44.964623 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:44.993844 790270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1209 01:56:44.993870 790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1209 01:56:45.004366 790270 api_server.go:72] duration metric: took 14.257187733s to wait for apiserver process to appear ...
I1209 01:56:45.004389 790270 api_server.go:88] waiting for apiserver healthz status ...
I1209 01:56:45.004425 790270 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
I1209 01:56:45.030088 790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1209 01:56:45.061793 790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1209 01:56:45.066028 790270 api_server.go:279] https://192.168.39.56:8443/healthz returned 200:
ok
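(The healthz probe above queries the API server directly; the same check can be made by hand against the endpoint reported in the log. TLS verification is skipped because the host does not trust the cluster's self-signed CA:

    curl -k https://192.168.39.56:8443/healthz

A healthy server responds with HTTP 200 and the body "ok", as seen above.)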
I1209 01:56:45.121762 790270 api_server.go:141] control plane version: v1.34.2
I1209 01:56:45.121805 790270 api_server.go:131] duration metric: took 117.409361ms to wait for apiserver health ...
I1209 01:56:45.121814 790270 system_pods.go:43] waiting for kube-system pods to appear ...
I1209 01:56:45.200681 790270 system_pods.go:59] 20 kube-system pods found
I1209 01:56:45.200730 790270 system_pods.go:61] "amd-gpu-device-plugin-465wk" [271974ca-7e3b-4c84-8934-f8e107aceaa3] Running
I1209 01:56:45.200736 790270 system_pods.go:61] "coredns-66bc5c9577-j5w2c" [9e9c57dc-b6bd-42be-8a3b-f1e10a9fb863] Running
I1209 01:56:45.200740 790270 system_pods.go:61] "coredns-66bc5c9577-qzn64" [85f77647-b009-4c5e-a48f-443611e37520] Running
I1209 01:56:45.200748 790270 system_pods.go:61] "csi-hostpath-attacher-0" [16b4ff75-ad5f-4f79-9478-0a122848f9a4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1209 01:56:45.200753 790270 system_pods.go:61] "csi-hostpath-resizer-0" [3a1a1237-31f7-4ca1-87a4-02b6d2387c27] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1209 01:56:45.200759 790270 system_pods.go:61] "csi-hostpathplugin-mznj5" [d90dc4bb-01fc-4ff5-9f29-33d2a8cd7c4c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1209 01:56:45.200763 790270 system_pods.go:61] "etcd-addons-520986" [eff98cf5-6ef4-4096-9da0-f8f6eab8818b] Running
I1209 01:56:45.200770 790270 system_pods.go:61] "kube-apiserver-addons-520986" [1c57a257-5404-4891-8de2-64d25b9280fb] Running
I1209 01:56:45.200773 790270 system_pods.go:61] "kube-controller-manager-addons-520986" [4e29595d-3d0f-4985-a4f3-1b0b0061dbd5] Running
I1209 01:56:45.200778 790270 system_pods.go:61] "kube-ingress-dns-minikube" [f2d2941a-a050-42ba-966c-f2a4c9f45ecf] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1209 01:56:45.200781 790270 system_pods.go:61] "kube-proxy-55jwk" [cef9515a-0047-4058-95ce-18b2265f4a40] Running
I1209 01:56:45.200785 790270 system_pods.go:61] "kube-scheduler-addons-520986" [a272e14e-90af-41e0-a5ba-45bd0d3467c6] Running
I1209 01:56:45.200789 790270 system_pods.go:61] "metrics-server-85b7d694d7-6h6ks" [9933e398-1bd2-4f95-9968-ac571b18b98d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1209 01:56:45.200795 790270 system_pods.go:61] "nvidia-device-plugin-daemonset-fmfwp" [6680e716-57e7-4dac-bfc6-474c174bfa12] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1209 01:56:45.200804 790270 system_pods.go:61] "registry-6b586f9694-vlvl7" [101e7e22-6338-450e-b175-a29aa66aa838] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1209 01:56:45.200809 790270 system_pods.go:61] "registry-creds-764b6fb674-srdn7" [566b01af-141e-4867-8fff-0b9a84525ab7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1209 01:56:45.200813 790270 system_pods.go:61] "registry-proxy-md9zq" [b449333e-cc2d-4741-a901-fdcbae2dbeeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1209 01:56:45.200821 790270 system_pods.go:61] "snapshot-controller-7d9fbc56b8-v4xmh" [5182815a-a54b-4cdf-bb5e-722920ab9087] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1209 01:56:45.200826 790270 system_pods.go:61] "snapshot-controller-7d9fbc56b8-vgbx2" [05d44bab-ebf0-4e4c-b9ff-0255e3c6f3ec] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1209 01:56:45.200833 790270 system_pods.go:61] "storage-provisioner" [7ab5a70f-f2ad-4920-8048-ba19c19bed2d] Running
I1209 01:56:45.200845 790270 system_pods.go:74] duration metric: took 79.018375ms to wait for pod list to return data ...
I1209 01:56:45.200858 790270 default_sa.go:34] waiting for default service account to be created ...
I1209 01:56:45.298596 790270 default_sa.go:45] found service account: "default"
I1209 01:56:45.298630 790270 default_sa.go:55] duration metric: took 97.765728ms for default service account to be created ...
I1209 01:56:45.298656 790270 system_pods.go:116] waiting for k8s-apps to be running ...
I1209 01:56:45.371360 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:45.371628 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:45.372009 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:45.372225 790270 system_pods.go:86] 20 kube-system pods found
I1209 01:56:45.372249 790270 system_pods.go:89] "amd-gpu-device-plugin-465wk" [271974ca-7e3b-4c84-8934-f8e107aceaa3] Running
I1209 01:56:45.372255 790270 system_pods.go:89] "coredns-66bc5c9577-j5w2c" [9e9c57dc-b6bd-42be-8a3b-f1e10a9fb863] Running
I1209 01:56:45.372259 790270 system_pods.go:89] "coredns-66bc5c9577-qzn64" [85f77647-b009-4c5e-a48f-443611e37520] Running
I1209 01:56:45.372266 790270 system_pods.go:89] "csi-hostpath-attacher-0" [16b4ff75-ad5f-4f79-9478-0a122848f9a4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1209 01:56:45.372270 790270 system_pods.go:89] "csi-hostpath-resizer-0" [3a1a1237-31f7-4ca1-87a4-02b6d2387c27] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1209 01:56:45.372279 790270 system_pods.go:89] "csi-hostpathplugin-mznj5" [d90dc4bb-01fc-4ff5-9f29-33d2a8cd7c4c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1209 01:56:45.372290 790270 system_pods.go:89] "etcd-addons-520986" [eff98cf5-6ef4-4096-9da0-f8f6eab8818b] Running
I1209 01:56:45.372294 790270 system_pods.go:89] "kube-apiserver-addons-520986" [1c57a257-5404-4891-8de2-64d25b9280fb] Running
I1209 01:56:45.372299 790270 system_pods.go:89] "kube-controller-manager-addons-520986" [4e29595d-3d0f-4985-a4f3-1b0b0061dbd5] Running
I1209 01:56:45.372305 790270 system_pods.go:89] "kube-ingress-dns-minikube" [f2d2941a-a050-42ba-966c-f2a4c9f45ecf] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1209 01:56:45.372308 790270 system_pods.go:89] "kube-proxy-55jwk" [cef9515a-0047-4058-95ce-18b2265f4a40] Running
I1209 01:56:45.372311 790270 system_pods.go:89] "kube-scheduler-addons-520986" [a272e14e-90af-41e0-a5ba-45bd0d3467c6] Running
I1209 01:56:45.372320 790270 system_pods.go:89] "metrics-server-85b7d694d7-6h6ks" [9933e398-1bd2-4f95-9968-ac571b18b98d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1209 01:56:45.372328 790270 system_pods.go:89] "nvidia-device-plugin-daemonset-fmfwp" [6680e716-57e7-4dac-bfc6-474c174bfa12] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1209 01:56:45.372335 790270 system_pods.go:89] "registry-6b586f9694-vlvl7" [101e7e22-6338-450e-b175-a29aa66aa838] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1209 01:56:45.372342 790270 system_pods.go:89] "registry-creds-764b6fb674-srdn7" [566b01af-141e-4867-8fff-0b9a84525ab7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1209 01:56:45.372346 790270 system_pods.go:89] "registry-proxy-md9zq" [b449333e-cc2d-4741-a901-fdcbae2dbeeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1209 01:56:45.372359 790270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v4xmh" [5182815a-a54b-4cdf-bb5e-722920ab9087] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1209 01:56:45.372364 790270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vgbx2" [05d44bab-ebf0-4e4c-b9ff-0255e3c6f3ec] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1209 01:56:45.372368 790270 system_pods.go:89] "storage-provisioner" [7ab5a70f-f2ad-4920-8048-ba19c19bed2d] Running
I1209 01:56:45.372378 790270 system_pods.go:126] duration metric: took 73.715957ms to wait for k8s-apps to be running ...
I1209 01:56:45.372385 790270 system_svc.go:44] waiting for kubelet service to be running ....
I1209 01:56:45.372440 790270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1209 01:56:45.385804 790270 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-520986" context rescaled to 1 replicas
I1209 01:56:45.829710 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:45.829959 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:45.830004 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:46.302641 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:46.303957 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:46.303958 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:46.851877 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:46.852022 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:46.856219 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:47.015149 790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.984965311s)
I1209 01:56:47.015203 790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.953352972s)
I1209 01:56:47.015242 790270 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.642776551s)
I1209 01:56:47.015272 790270 system_svc.go:56] duration metric: took 1.642881096s WaitForService to wait for kubelet
I1209 01:56:47.015288 790270 kubeadm.go:587] duration metric: took 16.268114878s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1209 01:56:47.015319 790270 node_conditions.go:102] verifying NodePressure condition ...
I1209 01:56:47.016197 790270 addons.go:495] Verifying addon gcp-auth=true in "addons-520986"
I1209 01:56:47.017706 790270 out.go:179] * Verifying gcp-auth addon...
I1209 01:56:47.019863 790270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1209 01:56:47.020318 790270 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1209 01:56:47.020348 790270 node_conditions.go:123] node cpu capacity is 2
I1209 01:56:47.020366 790270 node_conditions.go:105] duration metric: took 5.041253ms to run NodePressure ...
I1209 01:56:47.020381 790270 start.go:242] waiting for startup goroutines ...
I1209 01:56:47.027072 790270 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1209 01:56:47.027088 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:47.302404 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:47.303989 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:47.304224 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:47.523580 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:47.797838 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:47.797987 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:47.798771 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:48.024672 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:48.297643 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:48.298213 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:48.298870 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:48.524379 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:48.801341 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:48.801424 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:48.801709 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:49.025766 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:49.296509 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:49.297502 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:49.297799 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:49.525328 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:49.799771 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:49.800096 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:49.800697 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:50.024826 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:50.307822 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:50.307878 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:50.307930 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:50.526200 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:50.800842 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:50.801390 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:50.803241 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:51.025529 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:51.299671 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:51.299695 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:51.299957 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:51.522985 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:51.795111 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:51.797022 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:51.797802 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:52.026634 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:52.301718 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:52.305093 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:52.305412 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:52.524001 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:52.798487 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:52.798779 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:52.798938 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:53.023524 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:53.304561 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:53.304983 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:53.307362 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:53.524447 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:53.795481 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:53.796552 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:53.797605 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:54.025856 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:54.296669 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:54.297336 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:54.297366 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:54.523031 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:54.798213 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:54.798276 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:54.798285 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:55.024071 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:55.297927 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:55.300258 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:55.300521 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:55.527174 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:55.832282 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:55.832373 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:55.835643 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:56.024361 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:56.295199 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:56.296898 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:56.297483 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:56.524383 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:56.803076 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:56.803102 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:56.805410 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:57.024303 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:57.297244 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:57.298062 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:57.298062 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:57.523303 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:57.795741 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:57.799707 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:57.800538 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:58.109484 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:58.297332 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:58.297504 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:58.297662 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:58.523915 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:58.796746 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:58.796883 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:58.797756 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:59.024818 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:59.363098 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:59.363382 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:59.364545 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:59.539100 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:56:59.796571 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:59.796590 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:59.797589 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:57:00.028560 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:00.297913 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:57:00.297944 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:00.298220 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:00.524661 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:00.803936 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:00.805177 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:57:00.805254 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:01.024816 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:01.296159 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:01.296352 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:01.296644 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:57:01.524740 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:01.796303 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:01.796812 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:01.798716 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:57:02.023667 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:02.398732 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:57:02.400765 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:02.401338 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:02.524962 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:02.798464 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:02.798543 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:57:02.798821 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:03.030448 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:03.297517 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:03.298456 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:57:03.299481 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:03.524534 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:03.796859 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:57:03.797886 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:03.799881 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:04.024290 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:04.302649 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:04.302669 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:57:04.304353 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:04.528566 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:04.812456 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:04.812558 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:57:04.812682 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:05.048799 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:05.299832 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:05.325456 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:05.325609 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:57:05.523412 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:05.802437 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:05.805404 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:57:05.805647 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:06.028928 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:06.310307 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:06.310334 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:06.311787 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:57:06.524469 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:06.863833 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:06.863882 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:06.864156 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:57:07.032161 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:07.296805 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:07.296874 790270 kapi.go:107] duration metric: took 22.504683835s to wait for kubernetes.io/minikube-addons=registry ...
I1209 01:57:07.298411 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:07.524053 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:07.797791 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:07.798656 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:08.023885 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:08.297144 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:08.297306 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:08.523972 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:08.796738 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:08.798095 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:09.023094 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:09.296297 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:09.296897 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:09.527040 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:09.796437 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:09.797060 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:10.023735 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:10.297429 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:10.299624 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:10.524478 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:10.796302 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:10.798107 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:11.023403 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:11.296675 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:11.296685 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:11.638495 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:11.796102 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:11.797189 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:12.023453 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:12.299844 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:12.304308 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:12.533637 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:12.800113 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:12.806819 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:13.025102 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:13.297517 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:13.298413 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:13.524566 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:13.798770 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:13.798924 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:14.025416 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:14.299246 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:14.299249 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:14.522925 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:14.797535 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:14.798039 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:15.025808 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:15.334275 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:15.346993 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:15.523607 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:15.796887 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:15.799238 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:16.023694 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:16.297288 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:16.298855 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:16.527444 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:16.800308 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:16.803960 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:17.025306 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:17.296219 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:17.300381 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:17.523472 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:17.801291 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:17.803298 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:18.028565 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:18.295832 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:18.298355 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:18.523527 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:18.796327 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:18.796348 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:19.024947 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:19.305010 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:19.307474 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:19.524487 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:19.797305 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:19.797365 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:20.023568 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:20.299619 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:20.304026 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:20.523634 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:20.800591 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:20.801509 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:21.023827 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:21.298682 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:21.301278 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:21.524182 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:21.798298 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:21.799596 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:22.024485 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:22.297358 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:22.299075 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:22.527264 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:22.804864 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:22.805573 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:23.024566 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:23.299950 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:23.302857 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:23.541953 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:23.798122 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:23.798965 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:24.023793 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:24.304608 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:24.304981 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:24.524268 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:24.798401 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:24.799095 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:25.024002 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:25.552982 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:25.557543 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:25.562293 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:25.797868 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:25.799915 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:26.023846 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:26.298427 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:26.298779 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:26.523152 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:26.798003 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:26.799947 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:27.023556 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:27.295143 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:27.297442 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:27.524610 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:27.796012 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:27.797042 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:28.022825 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:28.297006 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:28.297044 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:28.523258 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:28.797609 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:28.799517 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:29.024096 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:29.303094 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:29.307055 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:29.524229 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:29.796848 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:29.800891 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:30.027794 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:30.296225 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:30.298397 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:30.524719 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:30.796543 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:30.797410 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:31.025613 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:31.295896 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:31.298057 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:31.523711 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:31.798749 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:31.799977 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:32.024682 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:32.298383 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:32.299047 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:32.523224 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:32.797505 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:32.799391 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:33.023871 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:33.297386 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:33.300040 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:33.526660 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:33.798950 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:33.799766 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:34.030953 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:34.440075 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:34.440333 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:34.545057 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:34.797820 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:34.798518 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:35.026665 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:35.298050 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:35.298087 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:35.529732 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:35.798384 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:35.800921 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:36.025107 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:36.297864 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:36.297911 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:36.523858 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:36.798755 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:36.801345 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:37.043033 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:37.297145 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:37.298561 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:37.523791 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:37.797819 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:37.799076 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:38.025047 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:38.297696 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:38.297912 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:38.522999 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:38.798715 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:38.799489 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:39.026833 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:39.298122 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:39.298735 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:39.523748 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:39.798656 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:39.800478 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:40.026952 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:40.304689 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:40.306282 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:40.525212 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:40.796575 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:40.796770 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:41.024586 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:41.295813 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:41.298016 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:57:41.523648 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:41.795738 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:41.796333 790270 kapi.go:107] duration metric: took 57.004173314s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1209 01:57:42.023996 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:42.297313 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:42.523679 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:42.796399 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:43.023603 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:43.296278 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:43.523399 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:43.795025 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:44.032306 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:44.298658 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:44.523870 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:44.797945 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:45.032714 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:45.300872 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:45.527870 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:45.811770 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:46.045440 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:46.297148 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:46.523832 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:46.796581 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:47.023324 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:47.296403 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:47.525947 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:47.796766 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:48.024265 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:48.294627 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:48.523656 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:49.045905 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:49.050470 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:49.296524 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:49.524794 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:49.796053 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:50.025010 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:50.295995 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:50.523735 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:50.795744 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:51.040619 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:51.300787 790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:57:51.524631 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:51.798154 790270 kapi.go:107] duration metric: took 1m7.006948083s to wait for app.kubernetes.io/name=ingress-nginx ...
I1209 01:57:52.028016 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:52.553098 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:53.023624 790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:57:53.522721 790270 kapi.go:107] duration metric: took 1m6.502854307s to wait for kubernetes.io/minikube-addons=gcp-auth ...
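The kapi.go lines above are a plain poll loop: each addon's label selector is re-listed every few hundred milliseconds until the matching pod leaves Pending, and the total wait is then reported as a duration metric. The following is a minimal client-go sketch of that pattern, assuming an already-built clientset; waitForReadyPods and isReady are hypothetical helpers for illustration, not minikube's actual kapi implementation.

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForReadyPods polls pods matching selector in ns until every matching pod
// reports the Ready condition, or until timeout elapses. While waiting it logs
// the current phase, much like the "current state: Pending" lines above.
func waitForReadyPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true, func(ctx context.Context) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // transient error or nothing scheduled yet: keep polling
		}
		for _, p := range pods.Items {
			if !isReady(&p) {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
}

// isReady reports whether the pod's Ready condition is True.
func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}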
I1209 01:57:53.524145 790270 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-520986 cluster.
I1209 01:57:53.525395 790270 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1209 01:57:53.526343 790270 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
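The gcp-auth opt-out mentioned above only requires the label key to be present in the pod's metadata. A minimal sketch using the standard k8s.io/api types follows; the pod name, image, and label value are illustrative, and only the gcp-auth-skip-secret key comes from the message above.

package podspec

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// skipGCPAuthPod returns a pod labeled so that the gcp-auth admission webhook
// does not mount GCP credentials into it.
func skipGCPAuthPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // illustrative name
			Labels: map[string]string{
				"gcp-auth-skip-secret": "true", // the key is what matters; the value is illustrative
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
}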
I1209 01:57:53.527527 790270 out.go:179] * Enabled addons: nvidia-device-plugin, storage-provisioner, inspektor-gadget, registry-creds, amd-gpu-device-plugin, storage-provisioner-rancher, ingress-dns, cloud-spanner, volcano, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I1209 01:57:53.528763 790270 addons.go:530] duration metric: took 1m22.781576259s for enable addons: enabled=[nvidia-device-plugin storage-provisioner inspektor-gadget registry-creds amd-gpu-device-plugin storage-provisioner-rancher ingress-dns cloud-spanner volcano metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1209 01:57:53.528814 790270 start.go:247] waiting for cluster config update ...
I1209 01:57:53.528842 790270 start.go:256] writing updated cluster config ...
I1209 01:57:53.529150 790270 ssh_runner.go:195] Run: rm -f paused
I1209 01:57:53.536902 790270 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1209 01:57:53.541010 790270 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j5w2c" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:57:53.546405 790270 pod_ready.go:94] pod "coredns-66bc5c9577-j5w2c" is "Ready"
I1209 01:57:53.546433 790270 pod_ready.go:86] duration metric: took 5.395319ms for pod "coredns-66bc5c9577-j5w2c" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:57:53.548652 790270 pod_ready.go:83] waiting for pod "etcd-addons-520986" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:57:53.556910 790270 pod_ready.go:94] pod "etcd-addons-520986" is "Ready"
I1209 01:57:53.556939 790270 pod_ready.go:86] duration metric: took 8.263896ms for pod "etcd-addons-520986" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:57:53.560280 790270 pod_ready.go:83] waiting for pod "kube-apiserver-addons-520986" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:57:53.566428 790270 pod_ready.go:94] pod "kube-apiserver-addons-520986" is "Ready"
I1209 01:57:53.566452 790270 pod_ready.go:86] duration metric: took 6.146456ms for pod "kube-apiserver-addons-520986" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:57:53.568528 790270 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-520986" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:57:53.941979 790270 pod_ready.go:94] pod "kube-controller-manager-addons-520986" is "Ready"
I1209 01:57:53.942023 790270 pod_ready.go:86] duration metric: took 373.470419ms for pod "kube-controller-manager-addons-520986" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:57:54.149948 790270 pod_ready.go:83] waiting for pod "kube-proxy-55jwk" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:57:54.542054 790270 pod_ready.go:94] pod "kube-proxy-55jwk" is "Ready"
I1209 01:57:54.542097 790270 pod_ready.go:86] duration metric: took 392.105036ms for pod "kube-proxy-55jwk" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:57:54.742233 790270 pod_ready.go:83] waiting for pod "kube-scheduler-addons-520986" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:57:55.140711 790270 pod_ready.go:94] pod "kube-scheduler-addons-520986" is "Ready"
I1209 01:57:55.140747 790270 pod_ready.go:86] duration metric: took 398.475149ms for pod "kube-scheduler-addons-520986" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:57:55.140759 790270 pod_ready.go:40] duration metric: took 1.603807652s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
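The per-label readiness checks above (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) are roughly what running kubectl wait --for=condition=Ready pod -l component=etcd -n kube-system --timeout=4m0s by hand would do for each selector, except that the test also treats a pod that is gone as acceptable.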
I1209 01:57:55.189030 790270 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
I1209 01:57:55.190941 790270 out.go:179] * Done! kubectl is now configured to use "addons-520986" cluster and "default" namespace by default
==> container status <==
CONTAINER       IMAGE           CREATED         STATE     NAME                      ATTEMPT   POD ID          POD                                       NAMESPACE
39fe88ba71d2f   d4918ca78576a   4 minutes ago   Running   nginx                     0         dbd8aabfaf48f   nginx                                     default
8e9d1663fa24f   56cc512116c8f   5 minutes ago   Running   busybox                   0         022924e7b1c19   busybox                                   default
9b68209ea1dc6   e16d1e3a10667   6 minutes ago   Running   local-path-provisioner    0         ccc510c62a230   local-path-provisioner-648f6765c9-hrjxd   local-path-storage
2a56255a22665   d5e667c0f2bb6   7 minutes ago   Running   amd-gpu-device-plugin     0         79c3913bccb20   amd-gpu-device-plugin-465wk               kube-system
d553c7d0bef84   6e38f40d628db   7 minutes ago   Running   storage-provisioner       0         d4b9d3be67780   storage-provisioner                       kube-system
23174f258c854   52546a367cc9e   7 minutes ago   Running   coredns                   0         ea664edffd935   coredns-66bc5c9577-j5w2c                  kube-system
13a4fa31c190b   8aa150647e88a   7 minutes ago   Running   kube-proxy                0         e1ec5b27c8fd6   kube-proxy-55jwk                          kube-system
9c7b09226f81d   a5f569d49a979   7 minutes ago   Running   kube-apiserver            0         afc469b9010cf   kube-apiserver-addons-520986              kube-system
29e4423f1374f   88320b5498ff2   7 minutes ago   Running   kube-scheduler            0         2c8f1e2b69c17   kube-scheduler-addons-520986              kube-system
0548de094b66e   a3e246e9556e9   7 minutes ago   Running   etcd                      0         8fbf81aa4719c   etcd-addons-520986                        kube-system
a61c6f513a666   01e8bacf0f500   7 minutes ago   Running   kube-controller-manager   0         7199f8af8ddcb   kube-controller-manager-addons-520986     kube-system
==> containerd <==
Dec 09 02:04:04 addons-520986 containerd[822]: time="2025-12-09T02:04:04.872664177Z" level=info msg="container event discarded" container=b6d8ae3444139c5165bf01f045fbbe8f81cec799b626d7fe5e2b641d32f954b7 type=CONTAINER_DELETED_EVENT
Dec 09 02:04:08 addons-520986 containerd[822]: time="2025-12-09T02:04:08.798003780Z" level=info msg="container event discarded" container=79255f868ba3325e48a459be04f990494c570aa595b9b3778c4cd14d8671b2ed type=CONTAINER_CREATED_EVENT
Dec 09 02:04:08 addons-520986 containerd[822]: time="2025-12-09T02:04:08.798052977Z" level=info msg="container event discarded" container=79255f868ba3325e48a459be04f990494c570aa595b9b3778c4cd14d8671b2ed type=CONTAINER_STARTED_EVENT
Dec 09 02:04:09 addons-520986 containerd[822]: time="2025-12-09T02:04:09.499784148Z" level=info msg="container event discarded" container=27135839a9beaf5344da00f37181e96466f7209ba71851ae74af4b74fdfecbea type=CONTAINER_CREATED_EVENT
Dec 09 02:04:09 addons-520986 containerd[822]: time="2025-12-09T02:04:09.592239431Z" level=info msg="container event discarded" container=27135839a9beaf5344da00f37181e96466f7209ba71851ae74af4b74fdfecbea type=CONTAINER_STARTED_EVENT
Dec 09 02:04:09 addons-520986 containerd[822]: time="2025-12-09T02:04:09.660249552Z" level=info msg="container event discarded" container=27135839a9beaf5344da00f37181e96466f7209ba71851ae74af4b74fdfecbea type=CONTAINER_STOPPED_EVENT
Dec 09 02:04:09 addons-520986 containerd[822]: time="2025-12-09T02:04:09.833974424Z" level=info msg="container event discarded" container=1dc578fd6ba60b648fde0ed7b6085c5a6a527d2340ad7bbc421c0d9e44393fd4 type=CONTAINER_STOPPED_EVENT
Dec 09 02:04:09 addons-520986 containerd[822]: time="2025-12-09T02:04:09.934628923Z" level=info msg="container event discarded" container=92c6653ed758f6f0a75ab0f308c3ced18d738c23c526f387bdaeac8570c294f5 type=CONTAINER_STOPPED_EVENT
Dec 09 02:04:10 addons-520986 containerd[822]: time="2025-12-09T02:04:10.641143934Z" level=info msg="container event discarded" container=4ee38fd6c723372bdd9c225f8e603ca77c8ae63ab6aa0103ad4896598f7e015b type=CONTAINER_STOPPED_EVENT
Dec 09 02:04:10 addons-520986 containerd[822]: time="2025-12-09T02:04:10.724585548Z" level=info msg="container event discarded" container=6afc2b9c115a65b3ab545f380c592edae369d4baead9af5ea76407b410ff9ed1 type=CONTAINER_STOPPED_EVENT
Dec 09 02:04:10 addons-520986 containerd[822]: time="2025-12-09T02:04:10.924620449Z" level=info msg="container event discarded" container=1dc578fd6ba60b648fde0ed7b6085c5a6a527d2340ad7bbc421c0d9e44393fd4 type=CONTAINER_DELETED_EVENT
Dec 09 02:04:10 addons-520986 containerd[822]: time="2025-12-09T02:04:10.988522300Z" level=info msg="container event discarded" container=4ee38fd6c723372bdd9c225f8e603ca77c8ae63ab6aa0103ad4896598f7e015b type=CONTAINER_DELETED_EVENT
Dec 09 02:04:11 addons-520986 containerd[822]: time="2025-12-09T02:04:11.093057406Z" level=info msg="container event discarded" container=79255f868ba3325e48a459be04f990494c570aa595b9b3778c4cd14d8671b2ed type=CONTAINER_STOPPED_EVENT
Dec 09 02:04:12 addons-520986 containerd[822]: time="2025-12-09T02:04:12.080606898Z" level=info msg="container event discarded" container=2e7265b84aa37590851169812c5ad9542f9f55876190c4bae4dd0d610bff6dea type=CONTAINER_STOPPED_EVENT
Dec 09 02:04:12 addons-520986 containerd[822]: time="2025-12-09T02:04:12.080703406Z" level=info msg="container event discarded" container=41522d71fff1759b9e18f13d7727d7d971af5b4377ee7e9e358ea3421b8e60d6 type=CONTAINER_STOPPED_EVENT
Dec 09 02:04:12 addons-520986 containerd[822]: time="2025-12-09T02:04:12.237639711Z" level=info msg="container event discarded" container=1f30f47f9902ce884e4554b111ab2e3305baaa88760959e2500c03a57244621d type=CONTAINER_STOPPED_EVENT
Dec 09 02:04:12 addons-520986 containerd[822]: time="2025-12-09T02:04:12.237680927Z" level=info msg="container event discarded" container=9d6277bdcab675adc40d925d9933916abe280d459fd67033f841014e806fbd38 type=CONTAINER_STOPPED_EVENT
Dec 09 02:04:12 addons-520986 containerd[822]: time="2025-12-09T02:04:12.489199634Z" level=info msg="container event discarded" container=ea4bbe7cdffb92f0c518f3506abdab64d336924630f8c26676805c2cdf7f6a00 type=CONTAINER_CREATED_EVENT
Dec 09 02:04:12 addons-520986 containerd[822]: time="2025-12-09T02:04:12.489248508Z" level=info msg="container event discarded" container=ea4bbe7cdffb92f0c518f3506abdab64d336924630f8c26676805c2cdf7f6a00 type=CONTAINER_STARTED_EVENT
Dec 09 02:04:12 addons-520986 containerd[822]: time="2025-12-09T02:04:12.988870636Z" level=info msg="container event discarded" container=41522d71fff1759b9e18f13d7727d7d971af5b4377ee7e9e358ea3421b8e60d6 type=CONTAINER_DELETED_EVENT
Dec 09 02:04:13 addons-520986 containerd[822]: time="2025-12-09T02:04:13.011399305Z" level=info msg="container event discarded" container=2e7265b84aa37590851169812c5ad9542f9f55876190c4bae4dd0d610bff6dea type=CONTAINER_DELETED_EVENT
Dec 09 02:04:14 addons-520986 containerd[822]: time="2025-12-09T02:04:14.635140970Z" level=info msg="container event discarded" container=b1ad20e63555188b95fba57b3713a77cad5b110a358b306cd2a840294e9048f4 type=CONTAINER_CREATED_EVENT
Dec 09 02:04:14 addons-520986 containerd[822]: time="2025-12-09T02:04:14.789575327Z" level=info msg="container event discarded" container=b1ad20e63555188b95fba57b3713a77cad5b110a358b306cd2a840294e9048f4 type=CONTAINER_STARTED_EVENT
Dec 09 02:04:16 addons-520986 containerd[822]: time="2025-12-09T02:04:16.035858359Z" level=info msg="container event discarded" container=1be3d75e31bd14f4f16aeb37cde885b4468e193f452eab254b4da24b0c78ae62 type=CONTAINER_CREATED_EVENT
Dec 09 02:04:16 addons-520986 containerd[822]: time="2025-12-09T02:04:16.035969205Z" level=info msg="container event discarded" container=1be3d75e31bd14f4f16aeb37cde885b4468e193f452eab254b4da24b0c78ae62 type=CONTAINER_STARTED_EVENT
==> coredns [23174f258c8545002487a49e485ba48589d5696413c8722dade1feffb060a643] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] Reloading
[INFO] 10.244.0.27:35550 - 31287 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000431095s
[INFO] 10.244.0.27:45336 - 28415 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000166577s
[INFO] 10.244.0.27:49874 - 32896 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149568s
[INFO] 10.244.0.27:40524 - 42867 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000210365s
[INFO] 10.244.0.27:42040 - 44177 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009067s
[INFO] 10.244.0.27:60282 - 4652 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000310362s
[INFO] 10.244.0.27:40634 - 56275 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003974228s
[INFO] 10.244.0.27:53329 - 39965 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002769225s
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
[INFO] Reloading complete
[INFO] 10.244.0.31:56459 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000519537s
[INFO] 10.244.0.31:56979 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00016087s
==> describe nodes <==
Name: addons-520986
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-520986
kubernetes.io/os=linux
minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
minikube.k8s.io/name=addons-520986
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_09T01_56_25_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-520986
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 09 Dec 2025 01:56:22 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-520986
AcquireTime: <unset>
RenewTime: Tue, 09 Dec 2025 02:04:14 +0000
Conditions:
  Type             Status   LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------   -----------------                 ------------------                ------                       -------
  MemoryPressure   False    Tue, 09 Dec 2025 02:03:13 +0000   Tue, 09 Dec 2025 01:56:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False    Tue, 09 Dec 2025 02:03:13 +0000   Tue, 09 Dec 2025 01:56:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False    Tue, 09 Dec 2025 02:03:13 +0000   Tue, 09 Dec 2025 01:56:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True     Tue, 09 Dec 2025 02:03:13 +0000   Tue, 09 Dec 2025 01:56:25 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.56
Hostname: addons-520986
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: c1934cbe821945129b0272a0810d6e14
System UUID: c1934cbe-8219-4512-9b02-72a0810d6e14
Boot ID: 5ba98362-4ea2-4d36-99c2-b5350ef5a136
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://2.1.4
Kubelet Version: v1.34.2
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
  Namespace            Name                                                          CPU Requests   CPU Limits   Memory Requests   Memory Limits   Age
  ---------            ----                                                          ------------   ----------   ---------------   -------------   ---
  default              busybox                                                       0 (0%)         0 (0%)       0 (0%)            0 (0%)          5m37s
  default              hello-world-app-5d498dc89-w92fp                               0 (0%)         0 (0%)       0 (0%)            0 (0%)          4m50s
  default              nginx                                                         0 (0%)         0 (0%)       0 (0%)            0 (0%)          4m58s
  kube-system          amd-gpu-device-plugin-465wk                                   0 (0%)         0 (0%)       0 (0%)            0 (0%)          7m42s
  kube-system          coredns-66bc5c9577-j5w2c                                      100m (5%)      0 (0%)       70Mi (1%)         170Mi (4%)      7m46s
  kube-system          etcd-addons-520986                                            100m (5%)      0 (0%)       100Mi (2%)        0 (0%)          7m53s
  kube-system          kube-apiserver-addons-520986                                  250m (12%)     0 (0%)       0 (0%)            0 (0%)          7m51s
  kube-system          kube-controller-manager-addons-520986                         200m (10%)     0 (0%)       0 (0%)            0 (0%)          7m51s
  kube-system          kube-proxy-55jwk                                              0 (0%)         0 (0%)       0 (0%)            0 (0%)          7m46s
  kube-system          kube-scheduler-addons-520986                                  100m (5%)      0 (0%)       0 (0%)            0 (0%)          7m51s
  kube-system          storage-provisioner                                           0 (0%)         0 (0%)       0 (0%)            0 (0%)          7m40s
  local-path-storage   helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12    0 (0%)         0 (0%)       0 (0%)            0 (0%)          15s
  local-path-storage   local-path-provisioner-648f6765c9-hrjxd                       0 (0%)         0 (0%)       0 (0%)            0 (0%)          7m38s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
  Resource            Requests     Limits
  --------            --------     ------
  cpu                 750m (37%)   0 (0%)
  memory              170Mi (4%)   170Mi (4%)
  ephemeral-storage   0 (0%)       0 (0%)
  hugepages-2Mi       0 (0%)       0 (0%)
Events:
  Type    Reason                    Age                     From              Message
  ----    ------                    ----                    ----              -------
  Normal  Starting                  7m44s                   kube-proxy
  Normal  NodeAllocatableEnforced   7m58s                   kubelet           Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory   7m58s (x8 over 7m58s)   kubelet           Node addons-520986 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure     7m58s (x8 over 7m58s)   kubelet           Node addons-520986 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID      7m58s (x7 over 7m58s)   kubelet           Node addons-520986 status is now: NodeHasSufficientPID
  Normal  Starting                  7m52s                   kubelet           Starting kubelet.
  Normal  NodeAllocatableEnforced   7m52s                   kubelet           Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory   7m52s                   kubelet           Node addons-520986 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure     7m52s                   kubelet           Node addons-520986 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID      7m52s                   kubelet           Node addons-520986 status is now: NodeHasSufficientPID
  Normal  NodeReady                 7m51s                   kubelet           Node addons-520986 status is now: NodeReady
  Normal  RegisteredNode            7m47s                   node-controller   Node addons-520986 event: Registered Node addons-520986 in Controller
  Normal  CIDRAssignmentFailed      7m47s                   cidrAllocator     Node addons-520986 status is now: CIDRAssignmentFailed
==> dmesg <==
[ +5.460097] kauditd_printk_skb: 107 callbacks suppressed
[ +1.561461] kauditd_printk_skb: 106 callbacks suppressed
[ +3.157294] kauditd_printk_skb: 76 callbacks suppressed
[ +5.033114] kauditd_printk_skb: 71 callbacks suppressed
[ +3.434526] kauditd_printk_skb: 81 callbacks suppressed
[ +0.000049] kauditd_printk_skb: 20 callbacks suppressed
[ +4.821963] kauditd_printk_skb: 86 callbacks suppressed
[Dec 9 01:58] kauditd_printk_skb: 89 callbacks suppressed
[ +0.000027] kauditd_printk_skb: 2 callbacks suppressed
[ +5.897955] kauditd_printk_skb: 26 callbacks suppressed
[ +8.503662] kauditd_printk_skb: 5 callbacks suppressed
[ +0.000051] kauditd_printk_skb: 68 callbacks suppressed
[ +11.573145] kauditd_printk_skb: 41 callbacks suppressed
[ +5.928919] kauditd_printk_skb: 22 callbacks suppressed
[Dec 9 01:59] kauditd_printk_skb: 64 callbacks suppressed
[ +0.000070] kauditd_printk_skb: 31 callbacks suppressed
[ +2.318746] kauditd_printk_skb: 213 callbacks suppressed
[ +0.769251] kauditd_printk_skb: 118 callbacks suppressed
[ +3.703597] kauditd_printk_skb: 48 callbacks suppressed
[ +3.154418] kauditd_printk_skb: 128 callbacks suppressed
[ +1.399107] kauditd_printk_skb: 42 callbacks suppressed
[Dec 9 02:01] kauditd_printk_skb: 107 callbacks suppressed
[ +0.000075] kauditd_printk_skb: 9 callbacks suppressed
[Dec 9 02:03] kauditd_printk_skb: 26 callbacks suppressed
[Dec 9 02:04] kauditd_printk_skb: 9 callbacks suppressed
==> etcd [0548de094b66ecc7dc2fb8fd3cf315649e76d464976ed0add9b985fbfa64ae2d] <==
{"level":"info","ts":"2025-12-09T01:57:25.543151Z","caller":"traceutil/trace.go:172","msg":"trace[1565615534] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1228; }","duration":"251.815281ms","start":"2025-12-09T01:57:25.291238Z","end":"2025-12-09T01:57:25.543053Z","steps":["trace[1565615534] 'agreement among raft nodes before linearized reading' (duration: 251.485261ms)"],"step_count":1}
{"level":"info","ts":"2025-12-09T01:57:25.543565Z","caller":"traceutil/trace.go:172","msg":"trace[1358099045] transaction","detail":"{read_only:false; response_revision:1228; number_of_response:1; }","duration":"314.343969ms","start":"2025-12-09T01:57:25.229202Z","end":"2025-12-09T01:57:25.543546Z","steps":["trace[1358099045] 'process raft request' (duration: 29.245942ms)","trace[1358099045] 'compare' (duration: 282.580623ms)"],"step_count":2}
{"level":"warn","ts":"2025-12-09T01:57:25.543650Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T01:57:25.229122Z","time spent":"314.473992ms","remote":"127.0.0.1:46654","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2159,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/snapshot-controller-7d9fbc56b8\" mod_revision:1225 > success:<request_put:<key:\"/registry/replicasets/kube-system/snapshot-controller-7d9fbc56b8\" value_size:2087 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/snapshot-controller-7d9fbc56b8\" > >"}
{"level":"warn","ts":"2025-12-09T01:57:25.546089Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"253.730919ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-09T01:57:25.546491Z","caller":"traceutil/trace.go:172","msg":"trace[1451489977] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1228; }","duration":"254.133354ms","start":"2025-12-09T01:57:25.292348Z","end":"2025-12-09T01:57:25.546481Z","steps":["trace[1451489977] 'agreement among raft nodes before linearized reading' (duration: 251.636899ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-09T01:57:34.422734Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.367248ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-09T01:57:34.422783Z","caller":"traceutil/trace.go:172","msg":"trace[558088992] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:1272; }","duration":"120.435042ms","start":"2025-12-09T01:57:34.302339Z","end":"2025-12-09T01:57:34.422774Z","steps":["trace[558088992] 'range keys from in-memory index tree' (duration: 120.310553ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-09T01:57:34.423138Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"246.72969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-09T01:57:34.423163Z","caller":"traceutil/trace.go:172","msg":"trace[1716658586] range","detail":"{range_begin:/registry/controllerrevisions; range_end:; response_count:0; response_revision:1272; }","duration":"246.759885ms","start":"2025-12-09T01:57:34.176396Z","end":"2025-12-09T01:57:34.423156Z","steps":["trace[1716658586] 'range keys from in-memory index tree' (duration: 246.660383ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-09T01:57:34.423369Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"213.841991ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-8mws5\" limit:1 ","response":"range_response_count:1 size:3689"}
{"level":"info","ts":"2025-12-09T01:57:34.423387Z","caller":"traceutil/trace.go:172","msg":"trace[1261994678] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-patch-8mws5; range_end:; response_count:1; response_revision:1272; }","duration":"213.862435ms","start":"2025-12-09T01:57:34.209519Z","end":"2025-12-09T01:57:34.423382Z","steps":["trace[1261994678] 'range keys from in-memory index tree' (duration: 213.733914ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-09T01:57:34.423772Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.187176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-09T01:57:34.423796Z","caller":"traceutil/trace.go:172","msg":"trace[1094916789] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1272; }","duration":"134.213608ms","start":"2025-12-09T01:57:34.289576Z","end":"2025-12-09T01:57:34.423790Z","steps":["trace[1094916789] 'range keys from in-memory index tree' (duration: 133.964979ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-09T01:57:34.424090Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.470418ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-09T01:57:34.424109Z","caller":"traceutil/trace.go:172","msg":"trace[798619159] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1272; }","duration":"134.491087ms","start":"2025-12-09T01:57:34.289613Z","end":"2025-12-09T01:57:34.424104Z","steps":["trace[798619159] 'range keys from in-memory index tree' (duration: 134.426524ms)"],"step_count":1}
{"level":"info","ts":"2025-12-09T01:57:49.037514Z","caller":"traceutil/trace.go:172","msg":"trace[1705176302] linearizableReadLoop","detail":"{readStateIndex:1366; appliedIndex:1366; }","duration":"248.741277ms","start":"2025-12-09T01:57:48.788730Z","end":"2025-12-09T01:57:49.037471Z","steps":["trace[1705176302] 'read index received' (duration: 248.733409ms)","trace[1705176302] 'applied index is now lower than readState.Index' (duration: 7.116µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-09T01:57:49.037846Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"249.064888ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-09T01:57:49.037991Z","caller":"traceutil/trace.go:172","msg":"trace[1815560509] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1336; }","duration":"249.253975ms","start":"2025-12-09T01:57:48.788726Z","end":"2025-12-09T01:57:49.037980Z","steps":["trace[1815560509] 'agreement among raft nodes before linearized reading' (duration: 248.991154ms)"],"step_count":1}
{"level":"info","ts":"2025-12-09T01:57:49.040361Z","caller":"traceutil/trace.go:172","msg":"trace[517498940] transaction","detail":"{read_only:false; response_revision:1337; number_of_response:1; }","duration":"279.546135ms","start":"2025-12-09T01:57:48.760800Z","end":"2025-12-09T01:57:49.040346Z","steps":["trace[517498940] 'process raft request' (duration: 277.441241ms)"],"step_count":1}
{"level":"info","ts":"2025-12-09T01:57:55.943518Z","caller":"traceutil/trace.go:172","msg":"trace[682111699] transaction","detail":"{read_only:false; response_revision:1402; number_of_response:1; }","duration":"115.266343ms","start":"2025-12-09T01:57:55.828236Z","end":"2025-12-09T01:57:55.943503Z","steps":["trace[682111699] 'process raft request' (duration: 115.175095ms)"],"step_count":1}
{"level":"info","ts":"2025-12-09T01:58:28.666620Z","caller":"traceutil/trace.go:172","msg":"trace[2090950996] linearizableReadLoop","detail":"{readStateIndex:1557; appliedIndex:1557; }","duration":"110.477036ms","start":"2025-12-09T01:58:28.556119Z","end":"2025-12-09T01:58:28.666596Z","steps":["trace[2090950996] 'read index received' (duration: 110.470443ms)","trace[2090950996] 'applied index is now lower than readState.Index' (duration: 5.583µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-09T01:58:28.666795Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.600584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-09T01:58:28.666825Z","caller":"traceutil/trace.go:172","msg":"trace[648922672] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1517; }","duration":"110.703044ms","start":"2025-12-09T01:58:28.556115Z","end":"2025-12-09T01:58:28.666818Z","steps":["trace[648922672] 'agreement among raft nodes before linearized reading' (duration: 110.571046ms)"],"step_count":1}
{"level":"info","ts":"2025-12-09T01:58:28.670363Z","caller":"traceutil/trace.go:172","msg":"trace[2043086108] transaction","detail":"{read_only:false; response_revision:1518; number_of_response:1; }","duration":"157.908886ms","start":"2025-12-09T01:58:28.512443Z","end":"2025-12-09T01:58:28.670352Z","steps":["trace[2043086108] 'process raft request' (duration: 157.021003ms)"],"step_count":1}
{"level":"info","ts":"2025-12-09T01:59:02.353373Z","caller":"traceutil/trace.go:172","msg":"trace[1399644659] transaction","detail":"{read_only:false; response_revision:1797; number_of_response:1; }","duration":"114.080503ms","start":"2025-12-09T01:59:02.239278Z","end":"2025-12-09T01:59:02.353358Z","steps":["trace[1399644659] 'process raft request' (duration: 113.989135ms)"],"step_count":1}
==> kernel <==
02:04:16 up 8 min, 0 users, load average: 0.40, 0.84, 0.63
Linux addons-520986 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 8 03:04:10 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [9c7b09226f81d86031e8765a14cb5e7f3340dc7dcf392d7c18885cbf0c616449] <==
W1209 01:58:32.259767 1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
E1209 01:58:48.021039 1 conn.go:339] Error on socket receive: read tcp 192.168.39.56:8443->192.168.39.1:60898: use of closed network connection
E1209 01:58:48.211398 1 conn.go:339] Error on socket receive: read tcp 192.168.39.56:8443->192.168.39.1:60934: use of closed network connection
I1209 01:58:57.968330 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.83.165"}
I1209 01:59:08.054171 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1209 01:59:18.811791 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1209 01:59:19.025175 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.207.205"}
I1209 01:59:21.525277 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1209 01:59:26.558074 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.57.27"}
E1209 01:59:29.233732 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
E1209 01:59:30.338983 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
E1209 01:59:30.346217 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
I1209 01:59:37.973182 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1209 01:59:37.973299 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1209 01:59:38.011099 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1209 01:59:38.011531 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1209 01:59:38.013992 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1209 01:59:38.015064 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1209 01:59:38.041722 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1209 01:59:38.041775 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1209 01:59:38.076616 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1209 01:59:38.076668 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1209 01:59:39.014776 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1209 01:59:39.076646 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1209 01:59:39.096034 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
==> kube-controller-manager [a61c6f513a666739d8ddd4935782e2124b1330e63d6b746c6c732fb533713dfd] <==
E1209 02:03:27.246310 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 02:03:30.342054 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 02:03:30.344221 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 02:03:38.014546 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 02:03:38.015725 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 02:03:43.493982 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 02:03:43.495665 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 02:03:47.703991 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 02:03:47.705329 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 02:03:49.884858 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 02:03:49.886167 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 02:03:55.131440 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 02:03:55.132837 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 02:03:58.597121 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 02:03:58.598851 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 02:03:58.675841 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 02:03:58.678537 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 02:03:59.210135 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 02:03:59.211821 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 02:04:00.477364 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 02:04:00.478748 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 02:04:03.743646 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 02:04:03.745181 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 02:04:11.766144 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 02:04:11.767532 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [13a4fa31c190baf19fe2eb0e8e3df418a4708d5340d079b6d6b362a34e8642fc] <==
I1209 01:56:31.331467 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1209 01:56:31.433451 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1209 01:56:31.433489 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.56"]
E1209 01:56:31.433553 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1209 01:56:31.540193 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1209 01:56:31.540360 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1209 01:56:31.540437 1 server_linux.go:132] "Using iptables Proxier"
I1209 01:56:31.574581 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1209 01:56:31.575250 1 server.go:527] "Version info" version="v1.34.2"
I1209 01:56:31.575278 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1209 01:56:31.580608 1 config.go:200] "Starting service config controller"
I1209 01:56:31.580639 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1209 01:56:31.580662 1 config.go:106] "Starting endpoint slice config controller"
I1209 01:56:31.580666 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1209 01:56:31.580675 1 config.go:403] "Starting serviceCIDR config controller"
I1209 01:56:31.580678 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1209 01:56:31.581713 1 config.go:309] "Starting node config controller"
I1209 01:56:31.587626 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1209 01:56:31.600003 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1209 01:56:31.680768 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1209 01:56:31.680793 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1209 01:56:31.680840 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [29e4423f1374f3418ef328d00e60cea3e5564243eb93157de099662229393316] <==
E1209 01:56:22.198590 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1209 01:56:22.199263 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1209 01:56:22.199155 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1209 01:56:22.199479 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1209 01:56:22.199562 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1209 01:56:22.199706 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1209 01:56:22.199852 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1209 01:56:22.199887 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1209 01:56:22.199963 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1209 01:56:22.199967 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1209 01:56:22.200177 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1209 01:56:22.199976 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1209 01:56:23.011765 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1209 01:56:23.073602 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1209 01:56:23.075750 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1209 01:56:23.100842 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1209 01:56:23.136471 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1209 01:56:23.197696 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1209 01:56:23.223284 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1209 01:56:23.376492 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1209 01:56:23.380609 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1209 01:56:23.385845 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1209 01:56:23.455267 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1209 01:56:23.621332 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
I1209 01:56:25.681986 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Dec 09 02:03:32 addons-520986 kubelet[1518]: I1209 02:03:32.016635 1518 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/5fa217e3-a5b5-4e25-a29f-62df9665dd23-data\") on node \"addons-520986\" DevicePath \"\""
Dec 09 02:03:32 addons-520986 kubelet[1518]: I1209 02:03:32.016693 1518 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8g87c\" (UniqueName: \"kubernetes.io/projected/5fa217e3-a5b5-4e25-a29f-62df9665dd23-kube-api-access-8g87c\") on node \"addons-520986\" DevicePath \"\""
Dec 09 02:03:32 addons-520986 kubelet[1518]: I1209 02:03:32.016707 1518 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/5fa217e3-a5b5-4e25-a29f-62df9665dd23-script\") on node \"addons-520986\" DevicePath \"\""
Dec 09 02:03:32 addons-520986 kubelet[1518]: I1209 02:03:32.734734 1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-465wk" secret="" err="secret \"gcp-auth\" not found"
Dec 09 02:03:32 addons-520986 kubelet[1518]: I1209 02:03:32.739283 1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fa217e3-a5b5-4e25-a29f-62df9665dd23" path="/var/lib/kubelet/pods/5fa217e3-a5b5-4e25-a29f-62df9665dd23/volumes"
Dec 09 02:03:37 addons-520986 kubelet[1518]: E1209 02:03:37.736715 1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:1.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-w92fp" podUID="3696685f-64cf-4c4c-b75b-aa7a4392f328"
Dec 09 02:03:49 addons-520986 kubelet[1518]: E1209 02:03:49.736262 1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:1.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-w92fp" podUID="3696685f-64cf-4c4c-b75b-aa7a4392f328"
Dec 09 02:04:01 addons-520986 kubelet[1518]: I1209 02:04:01.830531 1518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/8f8c48dc-9016-410e-aeaa-f33b2f768570-data\") pod \"helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12\" (UID: \"8f8c48dc-9016-410e-aeaa-f33b2f768570\") " pod="local-path-storage/helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12"
Dec 09 02:04:01 addons-520986 kubelet[1518]: I1209 02:04:01.830591 1518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/8f8c48dc-9016-410e-aeaa-f33b2f768570-script\") pod \"helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12\" (UID: \"8f8c48dc-9016-410e-aeaa-f33b2f768570\") " pod="local-path-storage/helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12"
Dec 09 02:04:01 addons-520986 kubelet[1518]: I1209 02:04:01.830614 1518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxxqr\" (UniqueName: \"kubernetes.io/projected/8f8c48dc-9016-410e-aeaa-f33b2f768570-kube-api-access-lxxqr\") pod \"helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12\" (UID: \"8f8c48dc-9016-410e-aeaa-f33b2f768570\") " pod="local-path-storage/helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12"
Dec 09 02:04:03 addons-520986 kubelet[1518]: E1209 02:04:03.218563 1518 log.go:32] "PullImage from image service failed" err=<
Dec 09 02:04:03 addons-520986 kubelet[1518]: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests
Dec 09 02:04:03 addons-520986 kubelet[1518]: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Dec 09 02:04:03 addons-520986 kubelet[1518]: > image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
Dec 09 02:04:03 addons-520986 kubelet[1518]: E1209 02:04:03.218655 1518 kuberuntime_image.go:43] "Failed to pull image" err=<
Dec 09 02:04:03 addons-520986 kubelet[1518]: failed to pull and unpack image "docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests
Dec 09 02:04:03 addons-520986 kubelet[1518]: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Dec 09 02:04:03 addons-520986 kubelet[1518]: > image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
Dec 09 02:04:03 addons-520986 kubelet[1518]: E1209 02:04:03.218803 1518 kuberuntime_manager.go:1449] "Unhandled Error" err=<
Dec 09 02:04:03 addons-520986 kubelet[1518]: container helper-pod start failed in pod helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12_local-path-storage(8f8c48dc-9016-410e-aeaa-f33b2f768570): ErrImagePull: failed to pull and unpack image "docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests
Dec 09 02:04:03 addons-520986 kubelet[1518]: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Dec 09 02:04:03 addons-520986 kubelet[1518]: > logger="UnhandledError"
Dec 09 02:04:03 addons-520986 kubelet[1518]: E1209 02:04:03.218942 1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12" podUID="8f8c48dc-9016-410e-aeaa-f33b2f768570"
Dec 09 02:04:03 addons-520986 kubelet[1518]: E1209 02:04:03.735804 1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:1.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-w92fp" podUID="3696685f-64cf-4c4c-b75b-aa7a4392f328"
Dec 09 02:04:04 addons-520986 kubelet[1518]: E1209 02:04:04.201183 1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12" podUID="8f8c48dc-9016-410e-aeaa-f33b2f768570"
==> storage-provisioner [d553c7d0bef844770979140bf5e5ea6d82b220e7fee4521ae8638b08b55ed34b] <==
W1209 02:03:51.510045 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:03:53.515136 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:03:53.522757 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:03:55.526180 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:03:55.532245 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:03:57.536227 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:03:57.544175 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:03:59.550778 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:03:59.557669 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:04:01.561337 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:04:01.570054 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:04:03.573544 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:04:03.578695 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:04:05.582746 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:04:05.590801 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:04:07.594476 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:04:07.600678 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:04:09.605422 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:04:09.611131 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:04:11.614640 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:04:11.620463 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:04:13.625982 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:04:13.633430 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:04:15.637218 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:04:15.643106 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-520986 -n addons-520986
helpers_test.go:269: (dbg) Run: kubectl --context addons-520986 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-w92fp test-local-path helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context addons-520986 describe pod hello-world-app-5d498dc89-w92fp test-local-path helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-520986 describe pod hello-world-app-5d498dc89-w92fp test-local-path helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12: exit status 1 (77.518512ms)
-- stdout --
Name: hello-world-app-5d498dc89-w92fp
Namespace: default
Priority: 0
Service Account: default
Node: addons-520986/192.168.39.56
Start Time: Tue, 09 Dec 2025 01:59:26 +0000
Labels: app=hello-world-app
pod-template-hash=5d498dc89
Annotations: <none>
Status: Pending
IP: 10.244.0.35
IPs:
IP: 10.244.0.35
Controlled By: ReplicaSet/hello-world-app-5d498dc89
Containers:
  hello-world-app:
    Container ID:
    Image:          docker.io/kicbase/echo-server:1.0
    Image ID:
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fjbmf (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
kube-api-access-fjbmf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  4m51s                 default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-w92fp to addons-520986
  Normal   Pulling    118s (x5 over 4m50s)  kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
  Warning  Failed     117s (x5 over 4m50s)  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": failed to pull and unpack image "docker.io/kicbase/echo-server:1.0": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
                                            toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     117s (x5 over 4m50s)  kubelet            Error: ErrImagePull
  Warning  Failed     55s (x15 over 4m49s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    1s (x19 over 4m49s)   kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
Name: test-local-path
Namespace: default
Priority: 0
Service Account: default
Node: <none>
Labels: run=test-local-path
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
busybox:
Image: busybox:stable
Port: <none>
Host Port: <none>
Command:
sh
-c
echo 'local-path-provisioner' > /test/file1
Environment: <none>
Mounts:
/test from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8j7c7 (ro)
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: test-pvc
ReadOnly: false
kube-api-access-8j7c7:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
-- /stdout --
** stderr **
Error from server (NotFound): pods "helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-520986 describe pod hello-world-app-5d498dc89-w92fp test-local-path helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12: exit status 1
addons_test.go:1113: (dbg) Run: out/minikube-linux-amd64 -p addons-520986 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-520986 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.805409369s)
--- FAIL: TestAddons/parallel/LocalPath (344.92s)