Test Report: KVM_Linux 21767

792b73f7e6a323c75f1a3ad863987d7e01fd8059:2025-10-25:42055

Failed tests (5/344)

Order  Failed test                                        Duration (s)
44     TestAddons/parallel/LocalPath                      302.1
90     TestFunctional/parallel/DashboardCmd               301.7
99     TestFunctional/parallel/PersistentVolumeClaim      369.58
103    TestFunctional/parallel/MySQL                      602.09
296    TestStartStop/group/old-k8s-version/serial/Pause   39.63
TestAddons/parallel/LocalPath (302.1s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-442185 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-442185 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc test-pvc -o jsonpath={.status.phase} -n default
	[... identical poll line repeated ~300 times until the 5m0s wait expired ...]
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
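The failure above is the standard poll-until-deadline pattern: the helper shells out to kubectl roughly once a second and gives up when the 5m0s context expires, which is exactly the "context deadline exceeded" error logged. Below is a minimal, self-contained sketch of that loop; the function name waitForPVCBound and the shell-out via os/exec are illustrative assumptions, not minikube's actual helper code.

```go
// Minimal sketch of the wait loop visible in the log: poll the PVC phase
// via kubectl until it reports "Bound" or the context deadline expires.
// waitForPVCBound is a hypothetical name; minikube's real helper differs.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPVCBound(ctx context.Context, kubeContext, name, ns string) error {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// Produces the "context deadline exceeded" seen in the report.
			return fmt.Errorf("failed waiting for PVC %s: %w", name, ctx.Err())
		case <-ticker.C:
			out, err := exec.CommandContext(ctx, "kubectl",
				"--context", kubeContext, "get", "pvc", name,
				"-o", "jsonpath={.status.phase}", "-n", ns).Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitForPVCBound(ctx, "addons-442185", "test-pvc", "default"); err != nil {
		fmt.Println(err) // e.g. failed waiting for PVC test-pvc: context deadline exceeded
	}
}
```

A PVC that never binds within the window here usually points at the storage-provisioner-rancher (local-path) controller failing to provision the volume, which is what the post-mortem dump below collects evidence for.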
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-442185 -n addons-442185
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-442185 logs -n 25
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                     ARGS                                                                                                                                                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-639122                                                                                                                                                                                                                                                                                                                                                                                                                       │ download-only-639122 │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ start   │ --download-only -p binary-mirror-136948 --alsologtostderr --binary-mirror http://127.0.0.1:37505 --driver=kvm2                                                                                                                                                                                                                                                                                                                                │ binary-mirror-136948 │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ delete  │ -p binary-mirror-136948                                                                                                                                                                                                                                                                                                                                                                                                                       │ binary-mirror-136948 │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable dashboard -p addons-442185                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ addons  │ disable dashboard -p addons-442185                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ start   │ -p addons-442185 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:15 UTC │
	│ addons  │ addons-442185 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                   │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:16 UTC │ 25 Oct 25 09:16 UTC │
	│ addons  │ addons-442185 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:16 UTC │ 25 Oct 25 09:16 UTC │
	│ addons  │ enable headlamp -p addons-442185 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                       │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:16 UTC │ 25 Oct 25 09:16 UTC │
	│ addons  │ addons-442185 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                            │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:16 UTC │ 25 Oct 25 09:16 UTC │
	│ addons  │ addons-442185 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                      │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:16 UTC │ 25 Oct 25 09:16 UTC │
	│ addons  │ addons-442185 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                      │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:16 UTC │ 25 Oct 25 09:16 UTC │
	│ addons  │ addons-442185 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:16 UTC │ 25 Oct 25 09:16 UTC │
	│ ip      │ addons-442185 ip                                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:16 UTC │ 25 Oct 25 09:16 UTC │
	│ addons  │ addons-442185 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:16 UTC │ 25 Oct 25 09:16 UTC │
	│ ssh     │ addons-442185 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                      │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:16 UTC │ 25 Oct 25 09:16 UTC │
	│ ip      │ addons-442185 ip                                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:16 UTC │ 25 Oct 25 09:16 UTC │
	│ addons  │ addons-442185 addons disable ingress-dns --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                               │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:16 UTC │ 25 Oct 25 09:16 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-442185                                                                                                                                                                                                                                                                                                                                                                │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:16 UTC │ 25 Oct 25 09:16 UTC │
	│ addons  │ addons-442185 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                            │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:16 UTC │ 25 Oct 25 09:16 UTC │
	│ addons  │ addons-442185 addons disable ingress --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                   │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:16 UTC │ 25 Oct 25 09:17 UTC │
	│ addons  │ addons-442185 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                          │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:17 UTC │ 25 Oct 25 09:17 UTC │
	│ addons  │ addons-442185 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                             │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:17 UTC │ 25 Oct 25 09:17 UTC │
	│ addons  │ addons-442185 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                           │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:17 UTC │ 25 Oct 25 09:17 UTC │
	│ addons  │ addons-442185 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                       │ addons-442185        │ jenkins │ v1.37.0 │ 25 Oct 25 09:17 UTC │ 25 Oct 25 09:17 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:12:07
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:12:07.208457  371983 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:12:07.208758  371983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:12:07.208769  371983 out.go:374] Setting ErrFile to fd 2...
	I1025 09:12:07.208775  371983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:12:07.209021  371983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
	I1025 09:12:07.209594  371983 out.go:368] Setting JSON to false
	I1025 09:12:07.210579  371983 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3269,"bootTime":1761380258,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:12:07.210674  371983 start.go:141] virtualization: kvm guest
	I1025 09:12:07.212625  371983 out.go:179] * [addons-442185] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:12:07.213793  371983 notify.go:220] Checking for updates...
	I1025 09:12:07.213798  371983 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:12:07.214944  371983 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:12:07.216170  371983 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-367343/kubeconfig
	I1025 09:12:07.217514  371983 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-367343/.minikube
	I1025 09:12:07.218950  371983 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:12:07.220103  371983 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:12:07.221387  371983 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:12:07.251883  371983 out.go:179] * Using the kvm2 driver based on user configuration
	I1025 09:12:07.253237  371983 start.go:305] selected driver: kvm2
	I1025 09:12:07.253251  371983 start.go:925] validating driver "kvm2" against <nil>
	I1025 09:12:07.253267  371983 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:12:07.254002  371983 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:12:07.254269  371983 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:12:07.254318  371983 cni.go:84] Creating CNI manager for ""
	I1025 09:12:07.254422  371983 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 09:12:07.254444  371983 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 09:12:07.254509  371983 start.go:349] cluster config:
	{Name:addons-442185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-442185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:12:07.254643  371983 iso.go:125] acquiring lock: {Name:mkaf34b0e79311c874a9b61067611bc0cdebbfac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:07.256821  371983 out.go:179] * Starting "addons-442185" primary control-plane node in "addons-442185" cluster
	I1025 09:12:07.257954  371983 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1025 09:12:07.257993  371983 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-367343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1025 09:12:07.258008  371983 cache.go:58] Caching tarball of preloaded images
	I1025 09:12:07.258130  371983 preload.go:233] Found /home/jenkins/minikube-integration/21767-367343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 09:12:07.258142  371983 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1025 09:12:07.258524  371983 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/config.json ...
	I1025 09:12:07.258552  371983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/config.json: {Name:mk0b85c42bb2e631d6b1878bd841db2b5bb17f30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:12:07.258717  371983 start.go:360] acquireMachinesLock for addons-442185: {Name:mk098acfda26f2145f87464d3ecf0ec8fc8b43f6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 09:12:07.258782  371983 start.go:364] duration metric: took 48.535µs to acquireMachinesLock for "addons-442185"
	I1025 09:12:07.258806  371983 start.go:93] Provisioning new machine with config: &{Name:addons-442185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-442185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 09:12:07.258876  371983 start.go:125] createHost starting for "" (driver="kvm2")
	I1025 09:12:07.260290  371983 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1025 09:12:07.260472  371983 start.go:159] libmachine.API.Create for "addons-442185" (driver="kvm2")
	I1025 09:12:07.260502  371983 client.go:168] LocalClient.Create starting
	I1025 09:12:07.260612  371983 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca.pem
	I1025 09:12:07.415551  371983 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/cert.pem
	I1025 09:12:07.741346  371983 main.go:141] libmachine: creating domain...
	I1025 09:12:07.741367  371983 main.go:141] libmachine: creating network...
	I1025 09:12:07.742949  371983 main.go:141] libmachine: found existing default network
	I1025 09:12:07.743209  371983 main.go:141] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1025 09:12:07.743823  371983 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e080e0}
	I1025 09:12:07.743924  371983 main.go:141] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-442185</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1025 09:12:07.750002  371983 main.go:141] libmachine: creating private network mk-addons-442185 192.168.39.0/24...
	I1025 09:12:07.817682  371983 main.go:141] libmachine: private network mk-addons-442185 192.168.39.0/24 created
	I1025 09:12:07.817957  371983 main.go:141] libmachine: <network>
	  <name>mk-addons-442185</name>
	  <uuid>982026cb-4cd4-4397-b8c6-7821f8cb4390</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:bf:ba:cb'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
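
Editor's note: the kvm2 driver feeds the XML above to libvirt through its Go bindings rather than shelling out to virsh. A minimal sketch of the define-then-activate sequence, assuming the current upstream module path libvirt.org/go/libvirt (minikube itself may vendor an older import path); error handling is abbreviated:

    package sketch

    import (
        libvirt "libvirt.org/go/libvirt"
    )

    // defineAndStartNetwork defines a persistent libvirt network from the
    // XML shown above and brings it up, mirroring the
    // "creating private network mk-addons-442185 ..." step in the log.
    func defineAndStartNetwork(xml string) error {
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
        if err != nil {
            return err
        }
        defer conn.Close()

        nw, err := conn.NetworkDefineXML(xml) // persistent definition
        if err != nil {
            return err
        }
        defer nw.Free()

        if err := nw.Create(); err != nil { // activate (equivalent to `virsh net-start`)
            return err
        }
        return nw.SetAutostart(true)
    }

Once active, libvirt fills in the generated pieces (uuid, bridge name, mac), which is why the echoed XML above contains fields the request did not.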
	
	I1025 09:12:07.817998  371983 main.go:141] libmachine: setting up store path in /home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185 ...
	I1025 09:12:07.818026  371983 main.go:141] libmachine: building disk image from file:///home/jenkins/minikube-integration/21767-367343/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1025 09:12:07.818037  371983 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21767-367343/.minikube
	I1025 09:12:07.818126  371983 main.go:141] libmachine: Downloading /home/jenkins/minikube-integration/21767-367343/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21767-367343/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1025 09:12:08.100568  371983 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa...
	I1025 09:12:08.164335  371983 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/addons-442185.rawdisk...
	I1025 09:12:08.164385  371983 main.go:141] libmachine: Writing magic tar header
	I1025 09:12:08.164406  371983 main.go:141] libmachine: Writing SSH key tar header
	I1025 09:12:08.164475  371983 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185 ...
	I1025 09:12:08.164538  371983 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185
	I1025 09:12:08.164589  371983 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185 (perms=drwx------)
	I1025 09:12:08.164608  371983 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21767-367343/.minikube/machines
	I1025 09:12:08.164619  371983 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21767-367343/.minikube/machines (perms=drwxr-xr-x)
	I1025 09:12:08.164630  371983 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21767-367343/.minikube
	I1025 09:12:08.164640  371983 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21767-367343/.minikube (perms=drwxr-xr-x)
	I1025 09:12:08.164651  371983 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21767-367343
	I1025 09:12:08.164660  371983 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21767-367343 (perms=drwxrwxr-x)
	I1025 09:12:08.164674  371983 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1025 09:12:08.164684  371983 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1025 09:12:08.164693  371983 main.go:141] libmachine: checking permissions on dir: /home/jenkins
	I1025 09:12:08.164701  371983 main.go:141] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1025 09:12:08.164712  371983 main.go:141] libmachine: checking permissions on dir: /home
	I1025 09:12:08.164720  371983 main.go:141] libmachine: skipping /home - not owner
	I1025 09:12:08.164724  371983 main.go:141] libmachine: defining domain...
	I1025 09:12:08.166011  371983 main.go:141] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-442185</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/addons-442185.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-442185'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
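
Editor's note: the domain XML goes through the same API. A sketch of the define-and-start path, extending the previous sketch (same package and import); the wrapper name is hypothetical:

    // defineAndStartDomain defines the VM from the XML above and boots it,
    // corresponding to the "defining domain..." / "starting domain..." steps.
    func defineAndStartDomain(conn *libvirt.Connect, xml string) (*libvirt.Domain, error) {
        dom, err := conn.DomainDefineXML(xml) // persistent define
        if err != nil {
            return nil, err
        }
        if err := dom.Create(); err != nil { // start (equivalent to `virsh start`)
            dom.Free()
            return nil, err
        }
        // The live XML (as with `virsh dumpxml`) now carries libvirt-assigned
        // details such as MAC addresses and PCI slots, which the log dumps next.
        return dom, nil
    }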
	
	I1025 09:12:08.173590  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:1d:ad:7a in network default
	I1025 09:12:08.174315  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:08.174334  371983 main.go:141] libmachine: starting domain...
	I1025 09:12:08.174339  371983 main.go:141] libmachine: ensuring networks are active...
	I1025 09:12:08.175029  371983 main.go:141] libmachine: Ensuring network default is active
	I1025 09:12:08.175485  371983 main.go:141] libmachine: Ensuring network mk-addons-442185 is active
	I1025 09:12:08.176118  371983 main.go:141] libmachine: getting domain XML...
	I1025 09:12:08.177097  371983 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-442185</name>
	  <uuid>f8a191ff-2d22-44bf-b68e-2a9ddecda6ac</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/addons-442185.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:70:69:a7'/>
	      <source network='mk-addons-442185'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:1d:ad:7a'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1025 09:12:09.479085  371983 main.go:141] libmachine: waiting for domain to start...
	I1025 09:12:09.480430  371983 main.go:141] libmachine: domain is now running
	I1025 09:12:09.480451  371983 main.go:141] libmachine: waiting for IP...
	I1025 09:12:09.481119  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:09.481574  371983 main.go:141] libmachine: no network interface addresses found for domain addons-442185 (source=lease)
	I1025 09:12:09.481589  371983 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:12:09.481862  371983 main.go:141] libmachine: unable to find current IP address of domain addons-442185 in network mk-addons-442185 (interfaces detected: [])
	I1025 09:12:09.481905  371983 retry.go:31] will retry after 273.915103ms: waiting for domain to come up
	I1025 09:12:09.757440  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:09.757960  371983 main.go:141] libmachine: no network interface addresses found for domain addons-442185 (source=lease)
	I1025 09:12:09.757981  371983 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:12:09.758271  371983 main.go:141] libmachine: unable to find current IP address of domain addons-442185 in network mk-addons-442185 (interfaces detected: [])
	I1025 09:12:09.758310  371983 retry.go:31] will retry after 326.545542ms: waiting for domain to come up
	I1025 09:12:10.086819  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:10.087379  371983 main.go:141] libmachine: no network interface addresses found for domain addons-442185 (source=lease)
	I1025 09:12:10.087395  371983 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:12:10.087691  371983 main.go:141] libmachine: unable to find current IP address of domain addons-442185 in network mk-addons-442185 (interfaces detected: [])
	I1025 09:12:10.087731  371983 retry.go:31] will retry after 351.884682ms: waiting for domain to come up
	I1025 09:12:10.441332  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:10.441868  371983 main.go:141] libmachine: no network interface addresses found for domain addons-442185 (source=lease)
	I1025 09:12:10.441886  371983 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:12:10.442164  371983 main.go:141] libmachine: unable to find current IP address of domain addons-442185 in network mk-addons-442185 (interfaces detected: [])
	I1025 09:12:10.442241  371983 retry.go:31] will retry after 582.526213ms: waiting for domain to come up
	I1025 09:12:11.026002  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:11.026543  371983 main.go:141] libmachine: no network interface addresses found for domain addons-442185 (source=lease)
	I1025 09:12:11.026563  371983 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:12:11.026899  371983 main.go:141] libmachine: unable to find current IP address of domain addons-442185 in network mk-addons-442185 (interfaces detected: [])
	I1025 09:12:11.026941  371983 retry.go:31] will retry after 595.623723ms: waiting for domain to come up
	I1025 09:12:11.623841  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:11.624456  371983 main.go:141] libmachine: no network interface addresses found for domain addons-442185 (source=lease)
	I1025 09:12:11.624481  371983 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:12:11.624765  371983 main.go:141] libmachine: unable to find current IP address of domain addons-442185 in network mk-addons-442185 (interfaces detected: [])
	I1025 09:12:11.624813  371983 retry.go:31] will retry after 715.843539ms: waiting for domain to come up
	I1025 09:12:12.341996  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:12.342708  371983 main.go:141] libmachine: no network interface addresses found for domain addons-442185 (source=lease)
	I1025 09:12:12.342730  371983 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:12:12.343101  371983 main.go:141] libmachine: unable to find current IP address of domain addons-442185 in network mk-addons-442185 (interfaces detected: [])
	I1025 09:12:12.343150  371983 retry.go:31] will retry after 895.196569ms: waiting for domain to come up
	I1025 09:12:13.240215  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:13.240769  371983 main.go:141] libmachine: no network interface addresses found for domain addons-442185 (source=lease)
	I1025 09:12:13.240788  371983 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:12:13.241115  371983 main.go:141] libmachine: unable to find current IP address of domain addons-442185 in network mk-addons-442185 (interfaces detected: [])
	I1025 09:12:13.241159  371983 retry.go:31] will retry after 1.190732558s: waiting for domain to come up
	I1025 09:12:14.433723  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:14.434392  371983 main.go:141] libmachine: no network interface addresses found for domain addons-442185 (source=lease)
	I1025 09:12:14.434412  371983 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:12:14.434719  371983 main.go:141] libmachine: unable to find current IP address of domain addons-442185 in network mk-addons-442185 (interfaces detected: [])
	I1025 09:12:14.434781  371983 retry.go:31] will retry after 1.484009035s: waiting for domain to come up
	I1025 09:12:15.920441  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:15.920925  371983 main.go:141] libmachine: no network interface addresses found for domain addons-442185 (source=lease)
	I1025 09:12:15.920941  371983 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:12:15.921250  371983 main.go:141] libmachine: unable to find current IP address of domain addons-442185 in network mk-addons-442185 (interfaces detected: [])
	I1025 09:12:15.921300  371983 retry.go:31] will retry after 1.672094172s: waiting for domain to come up
	I1025 09:12:17.595979  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:17.596633  371983 main.go:141] libmachine: no network interface addresses found for domain addons-442185 (source=lease)
	I1025 09:12:17.596659  371983 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:12:17.597031  371983 main.go:141] libmachine: unable to find current IP address of domain addons-442185 in network mk-addons-442185 (interfaces detected: [])
	I1025 09:12:17.597085  371983 retry.go:31] will retry after 2.639154666s: waiting for domain to come up
	I1025 09:12:20.239807  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:20.240365  371983 main.go:141] libmachine: no network interface addresses found for domain addons-442185 (source=lease)
	I1025 09:12:20.240382  371983 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:12:20.240656  371983 main.go:141] libmachine: unable to find current IP address of domain addons-442185 in network mk-addons-442185 (interfaces detected: [])
	I1025 09:12:20.240696  371983 retry.go:31] will retry after 2.199283474s: waiting for domain to come up
	I1025 09:12:22.441393  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:22.441918  371983 main.go:141] libmachine: no network interface addresses found for domain addons-442185 (source=lease)
	I1025 09:12:22.441931  371983 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:12:22.442179  371983 main.go:141] libmachine: unable to find current IP address of domain addons-442185 in network mk-addons-442185 (interfaces detected: [])
	I1025 09:12:22.442262  371983 retry.go:31] will retry after 3.730218315s: waiting for domain to come up
	I1025 09:12:26.177135  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:26.177726  371983 main.go:141] libmachine: domain addons-442185 has current primary IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:26.177740  371983 main.go:141] libmachine: found domain IP: 192.168.39.30
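
Editor's note: the "waiting for IP" loop above polls the DHCP leases (then ARP) with a growing, jittered delay until an address appears, which is why the logged intervals are non-round and roughly increasing. A self-contained sketch of that retry shape; the lookupIP helper is hypothetical:

    package sketch

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookupIP with exponential backoff plus jitter,
    // matching the "will retry after ..." cadence in the log.
    func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := lookupIP(); ok {
                return ip, nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            delay = delay * 3 / 2 // grow ~1.5x per attempt
        }
        return "", fmt.Errorf("timed out waiting for domain IP")
    }
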
	I1025 09:12:26.177748  371983 main.go:141] libmachine: reserving static IP address...
	I1025 09:12:26.178183  371983 main.go:141] libmachine: unable to find host DHCP lease matching {name: "addons-442185", mac: "52:54:00:70:69:a7", ip: "192.168.39.30"} in network mk-addons-442185
	I1025 09:12:26.370997  371983 main.go:141] libmachine: reserved static IP address 192.168.39.30 for domain addons-442185
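
Editor's note: "reserving static IP address" means appending a <host> entry to the network's DHCP section so the lease stays pinned to this MAC. With the same libvirt bindings as the earlier network sketch, that is a live-plus-persistent network update (mac, name, and ip taken from the log):

    // reserveStaticIP pins the VM's MAC to its discovered IP in the
    // mk-addons-442185 DHCP pool, both in the running network and on disk.
    func reserveStaticIP(nw *libvirt.Network) error {
        hostXML := `<host mac='52:54:00:70:69:a7' name='addons-442185' ip='192.168.39.30'/>`
        return nw.Update(
            libvirt.NETWORK_UPDATE_COMMAND_ADD_LAST,
            libvirt.NETWORK_SECTION_IP_DHCP_HOST,
            -1, // parentIndex: let libvirt locate the matching <ip> element
            hostXML,
            libvirt.NETWORK_UPDATE_AFFECT_LIVE|libvirt.NETWORK_UPDATE_AFFECT_CONFIG,
        )
    }
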
	I1025 09:12:26.371026  371983 main.go:141] libmachine: waiting for SSH...
	I1025 09:12:26.371034  371983 main.go:141] libmachine: Getting to WaitForSSH function...
	I1025 09:12:26.373811  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:26.374239  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:minikube Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:26.374290  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:26.374542  371983 main.go:141] libmachine: Using SSH client type: native
	I1025 09:12:26.374837  371983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I1025 09:12:26.374849  371983 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1025 09:12:26.477929  371983 main.go:141] libmachine: SSH cmd err, output: <nil>: 
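
Editor's note: "waiting for SSH" simply runs `exit 0` over SSH with the freshly generated id_rsa key until the command succeeds. A minimal equivalent with golang.org/x/crypto/ssh; host-key checking is skipped here only because the key and VM were just created by the same process:

    package sketch

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // waitForSSH dials addr (e.g. "192.168.39.30:22") as user "docker" with
    // the machine's private key and runs `exit 0`, retrying until success.
    func waitForSSH(addr, keyPath string, timeout time.Duration) error {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fresh VM, no known_hosts yet
            Timeout:         5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
                session, serr := client.NewSession()
                if serr == nil {
                    serr = session.Run("exit 0") // the exact probe command from the log
                    session.Close()
                }
                client.Close()
                if serr == nil {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("ssh not reachable within %s", timeout)
    }
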
	I1025 09:12:26.478477  371983 main.go:141] libmachine: domain creation complete
	I1025 09:12:26.480045  371983 machine.go:93] provisionDockerMachine start ...
	I1025 09:12:26.482134  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:26.482538  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:26.482567  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:26.482746  371983 main.go:141] libmachine: Using SSH client type: native
	I1025 09:12:26.482953  371983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I1025 09:12:26.482963  371983 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:12:26.583754  371983 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1025 09:12:26.583784  371983 buildroot.go:166] provisioning hostname "addons-442185"
	I1025 09:12:26.587009  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:26.587474  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:26.587501  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:26.587723  371983 main.go:141] libmachine: Using SSH client type: native
	I1025 09:12:26.587991  371983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I1025 09:12:26.588007  371983 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-442185 && echo "addons-442185" | sudo tee /etc/hostname
	I1025 09:12:26.706556  371983 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-442185
	
	I1025 09:12:26.709720  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:26.710126  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:26.710160  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:26.710357  371983 main.go:141] libmachine: Using SSH client type: native
	I1025 09:12:26.710571  371983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I1025 09:12:26.710587  371983 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-442185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-442185/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-442185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:12:26.821038  371983 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:12:26.821076  371983 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21767-367343/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-367343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-367343/.minikube}
	I1025 09:12:26.821097  371983 buildroot.go:174] setting up certificates
	I1025 09:12:26.821107  371983 provision.go:84] configureAuth start
	I1025 09:12:26.824345  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:26.824771  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:26.824797  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:26.827618  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:26.828106  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:26.828136  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:26.828276  371983 provision.go:143] copyHostCerts
	I1025 09:12:26.828347  371983 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-367343/.minikube/ca.pem (1078 bytes)
	I1025 09:12:26.828466  371983 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-367343/.minikube/cert.pem (1123 bytes)
	I1025 09:12:26.828550  371983 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-367343/.minikube/key.pem (1675 bytes)
	I1025 09:12:26.828599  371983 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-367343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca-key.pem org=jenkins.addons-442185 san=[127.0.0.1 192.168.39.30 addons-442185 localhost minikube]
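
Editor's note: configureAuth issues a server certificate whose SANs cover every name the daemon may be reached by, per the san=[...] list above. A compressed crypto/x509 sketch of signing such a certificate against an existing CA (key loading and PEM encoding omitted; values copied from the log and config):

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signServerCert issues a server certificate for the SAN set logged
    // above, signed by the given CA certificate/key pair.
    func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-442185"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.30")},
            DNSNames:     []string{"addons-442185", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }
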
	I1025 09:12:27.204039  371983 provision.go:177] copyRemoteCerts
	I1025 09:12:27.204113  371983 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:12:27.207210  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:27.208039  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:27.208074  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:27.208294  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:27.290131  371983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:12:27.318775  371983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:12:27.347588  371983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 09:12:27.375163  371983 provision.go:87] duration metric: took 554.038192ms to configureAuth
	I1025 09:12:27.375210  371983 buildroot.go:189] setting minikube options for container-runtime
	I1025 09:12:27.375485  371983 config.go:182] Loaded profile config "addons-442185": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:12:27.378200  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:27.378616  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:27.378641  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:27.378790  371983 main.go:141] libmachine: Using SSH client type: native
	I1025 09:12:27.379004  371983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I1025 09:12:27.379017  371983 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 09:12:27.482315  371983 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1025 09:12:27.482340  371983 buildroot.go:70] root file system type: tmpfs
	I1025 09:12:27.482505  371983 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 09:12:27.485759  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:27.486178  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:27.486224  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:27.486438  371983 main.go:141] libmachine: Using SSH client type: native
	I1025 09:12:27.486709  371983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I1025 09:12:27.486783  371983 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 09:12:27.604999  371983 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 09:12:27.607414  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:27.607796  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:27.607822  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:27.607986  371983 main.go:141] libmachine: Using SSH client type: native
	I1025 09:12:27.608179  371983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I1025 09:12:27.608209  371983 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 09:12:28.486291  371983 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
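
Editor's note: the unit file is assembled on the host, pushed over SSH with the printf | tee idiom above, and swapped in only when it differs (the diff || { mv; daemon-reload; restart; } pattern keeps repeated provisioning idempotent). A sketch of rendering such a unit from a Go text/template; the template and field names here are illustrative, not minikube's actual source:

    package main

    import (
        "os"
        "text/template"
    )

    // dockerUnit is a trimmed version of the unit shown above; only the
    // ExecStart flags vary per driver and runtime.
    const dockerUnit = `[Unit]
    Description=Docker Application Container Engine
    Requires=docker.socket

    [Service]
    Type=notify
    Restart=always
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// -H unix:///var/run/docker.sock {{.ExtraFlags}}
    ExecReload=/bin/kill -s HUP $MAINPID

    [Install]
    WantedBy=multi-user.target
    `

    func main() {
        t := template.Must(template.New("docker.service").Parse(dockerUnit))
        // ExtraFlags corresponds to the TLS and insecure-registry options in the log.
        t.Execute(os.Stdout, struct{ ExtraFlags string }{
            "--tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12",
        })
    }
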
	
	I1025 09:12:28.486328  371983 machine.go:96] duration metric: took 2.006263481s to provisionDockerMachine
	I1025 09:12:28.486340  371983 client.go:171] duration metric: took 21.225828079s to LocalClient.Create
	I1025 09:12:28.486359  371983 start.go:167] duration metric: took 21.2258887s to libmachine.API.Create "addons-442185"
	I1025 09:12:28.486367  371983 start.go:293] postStartSetup for "addons-442185" (driver="kvm2")
	I1025 09:12:28.486380  371983 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:12:28.486457  371983 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:12:28.489072  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:28.489466  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:28.489521  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:28.489711  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:28.572024  371983 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:12:28.576733  371983 info.go:137] Remote host: Buildroot 2025.02
	I1025 09:12:28.576766  371983 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-367343/.minikube/addons for local assets ...
	I1025 09:12:28.576866  371983 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-367343/.minikube/files for local assets ...
	I1025 09:12:28.576903  371983 start.go:296] duration metric: took 90.528212ms for postStartSetup
	I1025 09:12:28.579871  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:28.580377  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:28.580406  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:28.580701  371983 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/config.json ...
	I1025 09:12:28.580897  371983 start.go:128] duration metric: took 21.322008892s to createHost
	I1025 09:12:28.583217  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:28.583602  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:28.583627  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:28.583822  371983 main.go:141] libmachine: Using SSH client type: native
	I1025 09:12:28.584044  371983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I1025 09:12:28.584056  371983 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 09:12:28.686802  371983 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761383548.663003769
	
	I1025 09:12:28.686835  371983 fix.go:216] guest clock: 1761383548.663003769
	I1025 09:12:28.686853  371983 fix.go:229] Guest: 2025-10-25 09:12:28.663003769 +0000 UTC Remote: 2025-10-25 09:12:28.580910001 +0000 UTC m=+21.422107989 (delta=82.093768ms)
	I1025 09:12:28.686879  371983 fix.go:200] guest clock delta is within tolerance: 82.093768ms
	I1025 09:12:28.686889  371983 start.go:83] releasing machines lock for "addons-442185", held for 21.428092531s
	I1025 09:12:28.690014  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:28.690470  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:28.690494  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:28.691137  371983 ssh_runner.go:195] Run: cat /version.json
	I1025 09:12:28.691202  371983 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:12:28.694421  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:28.694527  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:28.694874  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:28.694903  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:28.694947  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:28.694968  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:28.695156  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:28.695268  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:28.802755  371983 ssh_runner.go:195] Run: systemctl --version
	I1025 09:12:28.809652  371983 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:12:28.815815  371983 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:12:28.815905  371983 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:12:28.835815  371983 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 09:12:28.835855  371983 start.go:495] detecting cgroup driver to use...
	I1025 09:12:28.836009  371983 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:12:28.858335  371983 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1025 09:12:28.870853  371983 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 09:12:28.883737  371983 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 09:12:28.883807  371983 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 09:12:28.896223  371983 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 09:12:28.909023  371983 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 09:12:28.921526  371983 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 09:12:28.934454  371983 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:12:28.947771  371983 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 09:12:28.960466  371983 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1025 09:12:28.972625  371983 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1025 09:12:28.985210  371983 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:12:28.995676  371983 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 09:12:28.995734  371983 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 09:12:29.007891  371983 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:12:29.018572  371983 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:12:29.165260  371983 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 09:12:29.202021  371983 start.go:495] detecting cgroup driver to use...
	I1025 09:12:29.202124  371983 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 09:12:29.219046  371983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:12:29.235155  371983 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:12:29.256275  371983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:12:29.271467  371983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 09:12:29.292115  371983 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1025 09:12:29.328952  371983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 09:12:29.344964  371983 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:12:29.368303  371983 ssh_runner.go:195] Run: which cri-dockerd
	I1025 09:12:29.372710  371983 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 09:12:29.384141  371983 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1025 09:12:29.405606  371983 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 09:12:29.551548  371983 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 09:12:29.698944  371983 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 09:12:29.699109  371983 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 09:12:29.725214  371983 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1025 09:12:29.740314  371983 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:12:29.885463  371983 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 09:12:30.323913  371983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:12:30.339594  371983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1025 09:12:30.354783  371983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1025 09:12:30.370506  371983 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 09:12:30.513227  371983 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 09:12:30.657800  371983 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:12:30.801881  371983 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 09:12:30.839036  371983 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1025 09:12:30.855052  371983 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:12:30.993702  371983 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1025 09:12:31.094124  371983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1025 09:12:31.113591  371983 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 09:12:31.113683  371983 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 09:12:31.119678  371983 start.go:563] Will wait 60s for crictl version
	I1025 09:12:31.119768  371983 ssh_runner.go:195] Run: which crictl
	I1025 09:12:31.123929  371983 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 09:12:31.162888  371983 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.1
	RuntimeApiVersion:  v1
	I1025 09:12:31.162970  371983 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 09:12:31.191762  371983 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 09:12:31.217681  371983 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.5.1 ...
	I1025 09:12:31.220373  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:31.220739  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:31.220763  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:31.220984  371983 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1025 09:12:31.225426  371983 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:12:31.240106  371983 kubeadm.go:883] updating cluster {Name:addons-442185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-442185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:12:31.240274  371983 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1025 09:12:31.240334  371983 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 09:12:31.259739  371983 docker.go:691] Got preloaded images: 
	I1025 09:12:31.259766  371983 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.1 wasn't preloaded
	I1025 09:12:31.259839  371983 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 09:12:31.271827  371983 ssh_runner.go:195] Run: which lz4
	I1025 09:12:31.275844  371983 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 09:12:31.280579  371983 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 09:12:31.280619  371983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (353378914 bytes)
	I1025 09:12:32.467310  371983 docker.go:655] duration metric: took 1.191497379s to copy over tarball
	I1025 09:12:32.467385  371983 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 09:12:33.786655  371983 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.319231877s)
	I1025 09:12:33.786701  371983 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 09:12:33.835119  371983 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 09:12:33.850467  371983 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I1025 09:12:33.872877  371983 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1025 09:12:33.890220  371983 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:12:34.036231  371983 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 09:12:36.389244  371983 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.352967674s)
	I1025 09:12:36.389359  371983 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 09:12:36.409035  371983 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 09:12:36.409070  371983 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:12:36.409083  371983 kubeadm.go:934] updating node { 192.168.39.30 8443 v1.34.1 docker true true} ...
	I1025 09:12:36.409221  371983 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-442185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-442185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:12:36.409286  371983 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 09:12:36.461417  371983 cni.go:84] Creating CNI manager for ""
	I1025 09:12:36.461475  371983 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 09:12:36.461506  371983 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:12:36.461534  371983 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.30 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-442185 NodeName:addons-442185 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:12:36.461672  371983 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-442185"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.30"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.30"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:12:36.461754  371983 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:12:36.473996  371983 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:12:36.474095  371983 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:12:36.485836  371983 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1025 09:12:36.506357  371983 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:12:36.526851  371983 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1025 09:12:36.547709  371983 ssh_runner.go:195] Run: grep 192.168.39.30	control-plane.minikube.internal$ /etc/hosts
	I1025 09:12:36.551843  371983 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
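
The brace group above filters any stale entry out of /etc/hosts and appends a fresh one before copying the file back into place, so the file ends up containing:

    192.168.39.30	control-plane.minikube.internal
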
	I1025 09:12:36.566438  371983 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:12:36.713634  371983 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:12:36.750165  371983 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185 for IP: 192.168.39.30
	I1025 09:12:36.750209  371983 certs.go:195] generating shared ca certs ...
	I1025 09:12:36.750228  371983 certs.go:227] acquiring lock for ca certs: {Name:mk95947bc4fdffa4fda6bcfa90d00796a47f868e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:12:36.750380  371983 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-367343/.minikube/ca.key
	I1025 09:12:36.920679  371983 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-367343/.minikube/ca.crt ...
	I1025 09:12:36.920713  371983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/ca.crt: {Name:mkdc8b5a7a52e09272b380bdf0408d89d8b46fa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:12:36.920898  371983 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-367343/.minikube/ca.key ...
	I1025 09:12:36.920911  371983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/ca.key: {Name:mkdf34a1ad169e34be252d638d833e72572fc8df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:12:36.920987  371983 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-367343/.minikube/proxy-client-ca.key
	I1025 09:12:37.136069  371983 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-367343/.minikube/proxy-client-ca.crt ...
	I1025 09:12:37.136108  371983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/proxy-client-ca.crt: {Name:mk991c0065ee221b323efba19529530674233240 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:12:37.136317  371983 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-367343/.minikube/proxy-client-ca.key ...
	I1025 09:12:37.136330  371983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/proxy-client-ca.key: {Name:mk10262a612f9547ba45c9057ab5538c183143f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:12:37.136406  371983 certs.go:257] generating profile certs ...
	I1025 09:12:37.136475  371983 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.key
	I1025 09:12:37.136503  371983 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt with IP's: []
	I1025 09:12:37.234925  371983 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt ...
	I1025 09:12:37.234962  371983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: {Name:mk2e1896bc25b9885366f85b736c5cba3f7be801 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:12:37.235176  371983 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.key ...
	I1025 09:12:37.235204  371983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.key: {Name:mk1df2714914d1a099b11b5e916af2922c858369 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:12:37.235326  371983 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/apiserver.key.b8692520
	I1025 09:12:37.235347  371983 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/apiserver.crt.b8692520 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.30]
	I1025 09:12:37.444583  371983 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/apiserver.crt.b8692520 ...
	I1025 09:12:37.444618  371983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/apiserver.crt.b8692520: {Name:mk61b91c4281cf473e8cc7e1b3f68e64fea6d31a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:12:37.444824  371983 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/apiserver.key.b8692520 ...
	I1025 09:12:37.444847  371983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/apiserver.key.b8692520: {Name:mkf98e435b5cd1440168fc6de97c05f3a2bbf203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:12:37.444954  371983 certs.go:382] copying /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/apiserver.crt.b8692520 -> /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/apiserver.crt
	I1025 09:12:37.445034  371983 certs.go:386] copying /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/apiserver.key.b8692520 -> /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/apiserver.key
	I1025 09:12:37.445092  371983 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/proxy-client.key
	I1025 09:12:37.445111  371983 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/proxy-client.crt with IP's: []
	I1025 09:12:37.574573  371983 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/proxy-client.crt ...
	I1025 09:12:37.574606  371983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/proxy-client.crt: {Name:mk7ba0d82aa0e063f36fa13352164f11f970b26e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:12:37.574812  371983 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/proxy-client.key ...
	I1025 09:12:37.574830  371983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/proxy-client.key: {Name:mk4138d4137fa9d0b7fee739a7fb96a656f63a5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:12:37.575078  371983 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:12:37.575119  371983 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:12:37.575144  371983 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:12:37.575166  371983 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/key.pem (1675 bytes)
	I1025 09:12:37.575809  371983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:12:37.611272  371983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:12:37.651340  371983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:12:37.685154  371983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:12:37.713634  371983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 09:12:37.743176  371983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:12:37.773359  371983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:12:37.802655  371983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:12:37.832416  371983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:12:37.862754  371983 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:12:37.883464  371983 ssh_runner.go:195] Run: openssl version
	I1025 09:12:37.890101  371983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:12:37.903347  371983 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:12:37.908395  371983 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:12:37.908462  371983 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:12:37.915753  371983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
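
b5213941 is the OpenSSL subject hash of the minikube CA, computed by the x509 -hash call two lines up; OpenSSL discovers trusted CAs through <hash>.0 symlinks in /etc/ssl/certs, so the test-and-link guard keeps the cert resolvable without copying it twice. By hand:

    # print the subject hash the symlink name derives from
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
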
	I1025 09:12:37.929308  371983 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:12:37.934328  371983 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:12:37.934389  371983 kubeadm.go:400] StartCluster: {Name:addons-442185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-442185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:12:37.934500  371983 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 09:12:37.953106  371983 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:12:37.964990  371983 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:12:37.976805  371983 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:12:37.988421  371983 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:12:37.988444  371983 kubeadm.go:157] found existing configuration files:
	
	I1025 09:12:37.988494  371983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:12:37.998829  371983 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:12:37.998910  371983 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:12:38.010414  371983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:12:38.020963  371983 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:12:38.021043  371983 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:12:38.032448  371983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:12:38.042993  371983 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:12:38.043057  371983 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:12:38.054625  371983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:12:38.065756  371983 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:12:38.065843  371983 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:12:38.077290  371983 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 09:12:38.126147  371983 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:12:38.126246  371983 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:12:38.230388  371983 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:12:38.230543  371983 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:12:38.230711  371983 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:12:38.248478  371983 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:12:38.250343  371983 out.go:252]   - Generating certificates and keys ...
	I1025 09:12:38.250450  371983 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:12:38.250554  371983 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:12:38.540426  371983 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:12:38.697465  371983 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:12:38.753604  371983 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:12:38.858997  371983 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:12:39.098100  371983 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:12:39.098297  371983 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-442185 localhost] and IPs [192.168.39.30 127.0.0.1 ::1]
	I1025 09:12:39.272739  371983 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:12:39.272927  371983 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-442185 localhost] and IPs [192.168.39.30 127.0.0.1 ::1]
	I1025 09:12:39.431032  371983 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:12:39.568831  371983 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:12:40.353385  371983 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:12:40.353476  371983 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:12:40.773171  371983 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:12:40.987967  371983 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:12:41.008954  371983 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:12:41.047731  371983 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:12:41.186847  371983 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:12:41.187410  371983 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:12:41.189645  371983 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:12:41.191817  371983 out.go:252]   - Booting up control plane ...
	I1025 09:12:41.191917  371983 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:12:41.192694  371983 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:12:41.192773  371983 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:12:41.210567  371983 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:12:41.210714  371983 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:12:41.218017  371983 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:12:41.218735  371983 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:12:41.219071  371983 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:12:41.390944  371983 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:12:41.391060  371983 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:12:41.898905  371983 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 509.029892ms
	I1025 09:12:41.902573  371983 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:12:41.902697  371983 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.30:8443/livez
	I1025 09:12:41.902837  371983 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:12:41.902972  371983 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:12:44.713721  371983 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.812122992s
	I1025 09:12:45.806085  371983 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.905282143s
	I1025 09:12:47.902793  371983 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002662281s
	I1025 09:12:47.917646  371983 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:12:47.936983  371983 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:12:47.955806  371983 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:12:47.956122  371983 kubeadm.go:318] [mark-control-plane] Marking the node addons-442185 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:12:47.972704  371983 kubeadm.go:318] [bootstrap-token] Using token: 81u7v9.hoz2j3kryw0s9sc5
	I1025 09:12:47.973956  371983 out.go:252]   - Configuring RBAC rules ...
	I1025 09:12:47.974109  371983 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:12:47.979824  371983 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:12:47.989647  371983 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:12:47.993976  371983 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:12:47.997911  371983 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:12:48.005135  371983 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:12:48.312458  371983 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:12:48.777708  371983 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:12:49.309619  371983 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:12:49.310610  371983 kubeadm.go:318] 
	I1025 09:12:49.310727  371983 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:12:49.310751  371983 kubeadm.go:318] 
	I1025 09:12:49.310847  371983 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:12:49.310856  371983 kubeadm.go:318] 
	I1025 09:12:49.310911  371983 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:12:49.311011  371983 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:12:49.311062  371983 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:12:49.311070  371983 kubeadm.go:318] 
	I1025 09:12:49.311113  371983 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:12:49.311118  371983 kubeadm.go:318] 
	I1025 09:12:49.311160  371983 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:12:49.311165  371983 kubeadm.go:318] 
	I1025 09:12:49.311218  371983 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:12:49.311298  371983 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:12:49.311360  371983 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:12:49.311365  371983 kubeadm.go:318] 
	I1025 09:12:49.311441  371983 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:12:49.311550  371983 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:12:49.311561  371983 kubeadm.go:318] 
	I1025 09:12:49.311665  371983 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 81u7v9.hoz2j3kryw0s9sc5 \
	I1025 09:12:49.311811  371983 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0b111886a5743c78eab3487e478733208f36d4f6d16c51fd97c6b7c0a27a2373 \
	I1025 09:12:49.311851  371983 kubeadm.go:318] 	--control-plane 
	I1025 09:12:49.311862  371983 kubeadm.go:318] 
	I1025 09:12:49.311970  371983 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:12:49.311988  371983 kubeadm.go:318] 
	I1025 09:12:49.312089  371983 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 81u7v9.hoz2j3kryw0s9sc5 \
	I1025 09:12:49.312242  371983 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0b111886a5743c78eab3487e478733208f36d4f6d16c51fd97c6b7c0a27a2373 
	I1025 09:12:49.313591  371983 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:12:49.313634  371983 cni.go:84] Creating CNI manager for ""
	I1025 09:12:49.313655  371983 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 09:12:49.315351  371983 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 09:12:49.316420  371983 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 09:12:49.329903  371983 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
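
The 496-byte /etc/cni/net.d/1-k8s.conflist is not echoed into the log. For the pod CIDR chosen above (10.244.0.0/16), a typical bridge conflist looks roughly like the following; the exact plugin options are assumptions, not the file's verbatim contents:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
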
	I1025 09:12:49.352623  371983 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:12:49.352716  371983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:49.352782  371983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-442185 minikube.k8s.io/updated_at=2025_10_25T09_12_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=addons-442185 minikube.k8s.io/primary=true
	I1025 09:12:49.478734  371983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:49.510242  371983 ops.go:34] apiserver oom_adj: -16
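
The oom_adj value comes from the /proc read started at 09:12:49.352; -16 is the legacy /proc view of the OOM score adjustment the kubelet applies to its critical static pods, telling the kernel's OOM killer to prefer other victims under memory pressure. To inspect it directly:

    # negative values shield the apiserver from the OOM killer
    cat /proc/$(pgrep kube-apiserver)/oom_adj
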
	I1025 09:12:49.978975  371983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:50.478867  371983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:50.979081  371983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:51.479443  371983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:51.979525  371983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:52.479037  371983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:52.979013  371983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:53.479653  371983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:53.558759  371983 kubeadm.go:1113] duration metric: took 4.206112057s to wait for elevateKubeSystemPrivileges
	I1025 09:12:53.558810  371983 kubeadm.go:402] duration metric: took 15.624423735s to StartCluster
	I1025 09:12:53.558835  371983 settings.go:142] acquiring lock: {Name:mk07c5928ffa5e1a3fd7403d40bdc041a1f9dc04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:12:53.558977  371983 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-367343/kubeconfig
	I1025 09:12:53.559429  371983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/kubeconfig: {Name:mk0d177e5fe141fa9f67d394b101fd50eaede9bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:12:53.559642  371983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:12:53.559649  371983 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 09:12:53.559708  371983 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1025 09:12:53.559852  371983 addons.go:69] Setting yakd=true in profile "addons-442185"
	I1025 09:12:53.559874  371983 addons.go:238] Setting addon yakd=true in "addons-442185"
	I1025 09:12:53.559880  371983 addons.go:69] Setting default-storageclass=true in profile "addons-442185"
	I1025 09:12:53.559891  371983 addons.go:69] Setting inspektor-gadget=true in profile "addons-442185"
	I1025 09:12:53.559914  371983 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-442185"
	I1025 09:12:53.559922  371983 addons.go:238] Setting addon inspektor-gadget=true in "addons-442185"
	I1025 09:12:53.559931  371983 addons.go:69] Setting storage-provisioner=true in profile "addons-442185"
	I1025 09:12:53.559931  371983 config.go:182] Loaded profile config "addons-442185": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:12:53.559944  371983 addons.go:238] Setting addon storage-provisioner=true in "addons-442185"
	I1025 09:12:53.559963  371983 host.go:66] Checking if "addons-442185" exists ...
	I1025 09:12:53.559963  371983 addons.go:69] Setting cloud-spanner=true in profile "addons-442185"
	I1025 09:12:53.559978  371983 host.go:66] Checking if "addons-442185" exists ...
	I1025 09:12:53.559983  371983 addons.go:238] Setting addon cloud-spanner=true in "addons-442185"
	I1025 09:12:53.559989  371983 addons.go:69] Setting volcano=true in profile "addons-442185"
	I1025 09:12:53.560001  371983 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-442185"
	I1025 09:12:53.560012  371983 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-442185"
	I1025 09:12:53.560021  371983 host.go:66] Checking if "addons-442185" exists ...
	I1025 09:12:53.560023  371983 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-442185"
	I1025 09:12:53.560027  371983 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-442185"
	I1025 09:12:53.560060  371983 host.go:66] Checking if "addons-442185" exists ...
	I1025 09:12:53.560264  371983 addons.go:69] Setting volumesnapshots=true in profile "addons-442185"
	I1025 09:12:53.560289  371983 addons.go:238] Setting addon volumesnapshots=true in "addons-442185"
	I1025 09:12:53.560315  371983 host.go:66] Checking if "addons-442185" exists ...
	I1025 09:12:53.560638  371983 addons.go:69] Setting ingress=true in profile "addons-442185"
	I1025 09:12:53.560675  371983 addons.go:238] Setting addon ingress=true in "addons-442185"
	I1025 09:12:53.560734  371983 host.go:66] Checking if "addons-442185" exists ...
	I1025 09:12:53.561000  371983 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-442185"
	I1025 09:12:53.561037  371983 addons.go:69] Setting ingress-dns=true in profile "addons-442185"
	I1025 09:12:53.561054  371983 addons.go:238] Setting addon ingress-dns=true in "addons-442185"
	I1025 09:12:53.561055  371983 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-442185"
	I1025 09:12:53.561084  371983 host.go:66] Checking if "addons-442185" exists ...
	I1025 09:12:53.559922  371983 host.go:66] Checking if "addons-442185" exists ...
	I1025 09:12:53.561098  371983 addons.go:69] Setting gcp-auth=true in profile "addons-442185"
	I1025 09:12:53.561121  371983 mustload.go:65] Loading cluster: addons-442185
	I1025 09:12:53.559989  371983 addons.go:69] Setting metrics-server=true in profile "addons-442185"
	I1025 09:12:53.561177  371983 addons.go:238] Setting addon metrics-server=true in "addons-442185"
	I1025 09:12:53.561223  371983 host.go:66] Checking if "addons-442185" exists ...
	I1025 09:12:53.561313  371983 config.go:182] Loaded profile config "addons-442185": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:12:53.559961  371983 addons.go:69] Setting registry-creds=true in profile "addons-442185"
	I1025 09:12:53.561386  371983 addons.go:238] Setting addon registry-creds=true in "addons-442185"
	I1025 09:12:53.561411  371983 host.go:66] Checking if "addons-442185" exists ...
	I1025 09:12:53.561816  371983 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-442185"
	I1025 09:12:53.561841  371983 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-442185"
	I1025 09:12:53.561869  371983 host.go:66] Checking if "addons-442185" exists ...
	I1025 09:12:53.562005  371983 addons.go:69] Setting registry=true in profile "addons-442185"
	I1025 09:12:53.562029  371983 addons.go:238] Setting addon registry=true in "addons-442185"
	I1025 09:12:53.562057  371983 host.go:66] Checking if "addons-442185" exists ...
	I1025 09:12:53.560002  371983 addons.go:238] Setting addon volcano=true in "addons-442185"
	I1025 09:12:53.562142  371983 host.go:66] Checking if "addons-442185" exists ...
	I1025 09:12:53.561088  371983 host.go:66] Checking if "addons-442185" exists ...
	I1025 09:12:53.562975  371983 out.go:179] * Verifying Kubernetes components...
	I1025 09:12:53.564368  371983 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:12:53.568024  371983 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-442185"
	I1025 09:12:53.568064  371983 host.go:66] Checking if "addons-442185" exists ...
	I1025 09:12:53.568270  371983 addons.go:238] Setting addon default-storageclass=true in "addons-442185"
	I1025 09:12:53.568319  371983 host.go:66] Checking if "addons-442185" exists ...
	I1025 09:12:53.568932  371983 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1025 09:12:53.568983  371983 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1025 09:12:53.568984  371983 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:12:53.569836  371983 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 09:12:53.569864  371983 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1025 09:12:53.569217  371983 host.go:66] Checking if "addons-442185" exists ...
	I1025 09:12:53.570680  371983 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1025 09:12:53.570697  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 09:12:53.571368  371983 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 09:12:53.571384  371983 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 09:12:53.571497  371983 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:12:53.571381  371983 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:12:53.571540  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 09:12:53.571524  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:12:53.572118  371983 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1025 09:12:53.572121  371983 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1025 09:12:53.572133  371983 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1025 09:12:53.572121  371983 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1025 09:12:53.572121  371983 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1025 09:12:53.572207  371983 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1025 09:12:53.572148  371983 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 09:12:53.572890  371983 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1025 09:12:53.573124  371983 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1025 09:12:53.573134  371983 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 09:12:53.573127  371983 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1025 09:12:53.573526  371983 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:12:53.573544  371983 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:12:53.573140  371983 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1025 09:12:53.573248  371983 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:12:53.573677  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1025 09:12:53.573254  371983 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:12:53.573857  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1025 09:12:53.573276  371983 out.go:179]   - Using image docker.io/registry:3.0.0
	I1025 09:12:53.573971  371983 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 09:12:53.573981  371983 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 09:12:53.574029  371983 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:12:53.574044  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1025 09:12:53.574613  371983 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 09:12:53.575217  371983 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:12:53.575880  371983 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 09:12:53.576445  371983 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1025 09:12:53.576489  371983 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1025 09:12:53.576543  371983 out.go:179]   - Using image docker.io/busybox:stable
	I1025 09:12:53.577919  371983 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 09:12:53.578073  371983 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:12:53.577979  371983 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 09:12:53.578126  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1025 09:12:53.577923  371983 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:12:53.578096  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 09:12:53.579689  371983 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1025 09:12:53.579804  371983 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:12:53.579980  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1025 09:12:53.580249  371983 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 09:12:53.580580  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.581637  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.581828  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.582089  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.584042  371983 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 09:12:53.584146  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:53.584182  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.584509  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:53.584603  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:53.584639  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.584836  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:53.584863  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.584869  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.585210  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:53.586059  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.586137  371983 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1025 09:12:53.586163  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1025 09:12:53.586668  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:53.586850  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:53.586918  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.587157  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:53.587402  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:53.587788  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:53.587805  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.588674  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.588747  371983 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 09:12:53.588753  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.589227  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.589435  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.589786  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:53.589826  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.589866  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:53.589959  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:53.589982  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.590496  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:53.590851  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:53.590882  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.590883  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.590925  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.590780  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:53.590985  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.591123  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:53.591319  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:53.591382  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.591632  371983 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 09:12:53.591953  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:53.592333  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:53.592450  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.592668  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:53.592702  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.593085  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.593113  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:53.593093  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:53.593214  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.593330  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:53.593410  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.593439  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:53.593694  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:53.593954  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:53.594028  371983 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 09:12:53.594034  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.594315  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:53.594775  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.595217  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:53.595249  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.595255  371983 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 09:12:53.595270  371983 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 09:12:53.595459  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:53.597794  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.598142  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:12:53.598166  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:12:53.598324  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:12:54.409196  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 09:12:54.426198  371983 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 09:12:54.426245  371983 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 09:12:54.435525  371983 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 09:12:54.435552  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 09:12:54.482787  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1025 09:12:54.496368  371983 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:12:54.496457  371983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
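The pipeline above edits the live coredns ConfigMap in place: sed inserts a hosts block ahead of the "forward . /etc/resolv.conf" directive so that host.minikube.internal resolves to the host-side gateway (192.168.39.1), adds a "log" directive ahead of "errors", and kubectl replace swaps the edited copy back in. Stripped of the minikube-specific binary and kubeconfig paths, the same edit is (a sketch, assuming the stock kubeadm Corefile layout):

    # Inject a host record into CoreDNS and turn on query logging.
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | kubectl replace -f -

The fallthrough keeps every other name flowing on to the forward plugin, so only the injected record changes resolution behaviour.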
	I1025 09:12:54.510369  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:12:54.517417  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:12:54.588232  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:12:54.624154  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:12:54.662814  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:12:54.666044  371983 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 09:12:54.666078  371983 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 09:12:54.725960  371983 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1025 09:12:54.725995  371983 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1025 09:12:54.771314  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:12:54.868305  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:12:54.960262  371983 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:12:54.960304  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1025 09:12:55.104999  371983 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 09:12:55.105026  371983 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 09:12:55.163418  371983 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:12:55.163441  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 09:12:55.192908  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:12:55.308961  371983 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 09:12:55.308994  371983 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 09:12:55.476912  371983 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 09:12:55.476948  371983 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 09:12:55.509865  371983 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1025 09:12:55.509904  371983 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1025 09:12:55.828633  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:12:55.850859  371983 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 09:12:55.850902  371983 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 09:12:55.883391  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:12:55.941828  371983 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:12:55.941868  371983 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 09:12:55.968657  371983 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 09:12:55.968694  371983 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 09:12:56.004808  371983 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1025 09:12:56.004849  371983 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1025 09:12:56.315322  371983 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:12:56.315360  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1025 09:12:56.351359  371983 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 09:12:56.351394  371983 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 09:12:56.408337  371983 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 09:12:56.408379  371983 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 09:12:56.443256  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:12:56.560939  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:12:56.659371  371983 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 09:12:56.659405  371983 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 09:12:56.762596  371983 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:12:56.762635  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 09:12:56.888411  371983 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 09:12:56.888441  371983 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 09:12:56.966738  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:12:57.283763  371983 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 09:12:57.283797  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 09:12:57.596792  371983 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 09:12:57.596818  371983 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 09:12:58.000914  371983 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 09:12:58.000956  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 09:12:58.403217  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.993970122s)
	I1025 09:12:58.591976  371983 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 09:12:58.592009  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 09:12:58.911268  371983 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 09:12:58.911297  371983 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 09:12:59.113790  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 09:13:01.051204  371983 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 09:13:01.053866  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:13:01.054286  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:13:01.054315  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:13:01.054488  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:13:01.698585  371983 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 09:13:01.938154  371983 addons.go:238] Setting addon gcp-auth=true in "addons-442185"
	I1025 09:13:01.938240  371983 host.go:66] Checking if "addons-442185" exists ...
	I1025 09:13:01.940116  371983 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 09:13:01.942605  371983 main.go:141] libmachine: domain addons-442185 has defined MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:13:01.942983  371983 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:69:a7", ip: ""} in network mk-addons-442185: {Iface:virbr1 ExpiryTime:2025-10-25 10:12:22 +0000 UTC Type:0 Mac:52:54:00:70:69:a7 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:addons-442185 Clientid:01:52:54:00:70:69:a7}
	I1025 09:13:01.943012  371983 main.go:141] libmachine: domain addons-442185 has defined IP address 192.168.39.30 and MAC address 52:54:00:70:69:a7 in network mk-addons-442185
	I1025 09:13:01.943199  371983 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/addons-442185/id_rsa Username:docker}
	I1025 09:13:08.423378  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (13.940549866s)
	I1025 09:13:08.423397  371983 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (13.926898715s)
	I1025 09:13:08.423432  371983 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (13.927024236s)
	I1025 09:13:08.423467  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (13.91305927s)
	I1025 09:13:08.423502  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (13.906056243s)
	I1025 09:13:08.423434  371983 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1025 09:13:08.423542  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (13.835276239s)
	I1025 09:13:08.423585  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (13.799400876s)
	I1025 09:13:08.423631  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (13.760793185s)
	I1025 09:13:08.423670  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (13.652328832s)
	I1025 09:13:08.423721  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (13.555389061s)
	I1025 09:13:08.423826  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (13.230892757s)
	I1025 09:13:08.423844  371983 addons.go:479] Verifying addon ingress=true in "addons-442185"
	I1025 09:13:08.423965  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (12.540551259s)
	I1025 09:13:08.424002  371983 addons.go:479] Verifying addon registry=true in "addons-442185"
	I1025 09:13:08.423931  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (12.595263919s)
	I1025 09:13:08.424093  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.980797377s)
	I1025 09:13:08.424114  371983 addons.go:479] Verifying addon metrics-server=true in "addons-442185"
	I1025 09:13:08.424115  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.863146443s)
	I1025 09:13:08.424218  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.457448097s)
	W1025 09:13:08.424921  371983 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 09:13:08.424942  371983 retry.go:31] will retry after 284.070833ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
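This first failure is an ordering race, not a bad manifest: the batch creates the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass in the same apply, but kubectl cannot map the new kind to a REST endpoint until the CRDs are established, hence "resource mapping not found". The conventional fix is to split the apply and wait for the CRD (a minimal sketch using the file and CRD names from the log; minikube's retry loop reaches the same end state by simply re-applying):

    # Install the CRD first, wait until the API server serves it, then create
    # the VolumeSnapshotClass that depends on it.
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml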
	W1025 09:13:08.424072  371983 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:13:08.424983  371983 retry.go:31] will retry after 128.376996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
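The inspektor-gadget retry is a different defect: the scp at 09:12:53 copied ig-crd.yaml across as only 14 bytes, so the file on the node cannot hold a complete object and fails kubectl's client-side validation ("apiVersion not set, kind not set") before the server is ever consulted. A quick check on the guest would confirm it (paths from the log; every manifest kubectl applies must carry at least an apiVersion and a kind):

    # Hypothetical inspection of the truncated file on the guest VM.
    wc -c /etc/kubernetes/addons/ig-crd.yaml   # expect 14 bytes, per the scp line above
    cat /etc/kubernetes/addons/ig-crd.yaml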
	I1025 09:13:08.424362  371983 node_ready.go:35] waiting up to 6m0s for node "addons-442185" to be "Ready" ...
	I1025 09:13:08.424408  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.31058963s)
	I1025 09:13:08.425121  371983 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-442185"
	I1025 09:13:08.424440  371983 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.48430325s)
	I1025 09:13:08.426802  371983 out.go:179] * Verifying registry addon...
	I1025 09:13:08.426809  371983 out.go:179] * Verifying ingress addon...
	I1025 09:13:08.426802  371983 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-442185 service yakd-dashboard -n yakd-dashboard
	
	I1025 09:13:08.427567  371983 out.go:179] * Verifying csi-hostpath-driver addon...
	I1025 09:13:08.427588  371983 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:13:08.428852  371983 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 09:13:08.428944  371983 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 09:13:08.429547  371983 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 09:13:08.430130  371983 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1025 09:13:08.431169  371983 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 09:13:08.431207  371983 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 09:13:08.516410  371983 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 09:13:08.516447  371983 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 09:13:08.531409  371983 node_ready.go:49] node "addons-442185" is "Ready"
	I1025 09:13:08.531463  371983 node_ready.go:38] duration metric: took 106.448747ms for node "addons-442185" to be "Ready" ...
	I1025 09:13:08.531497  371983 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:13:08.531560  371983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:13:08.554232  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1025 09:13:08.618790  371983 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
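That warning is an optimistic-concurrency conflict: the callback reads csi-hostpath-sc, flips its default annotation, and writes the object back, but the csi-hostpath apply is still mutating the same StorageClass, so the update's resourceVersion is stale by the time it lands. Doing the toggle with patches sidesteps the conflict, since a patch carries no resourceVersion (a sketch; class names taken from the error message):

    # Demote the CSI class and promote local-path as the default StorageClass.
    kubectl patch storageclass csi-hostpath-sc -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'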
	I1025 09:13:08.626764  371983 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 09:13:08.626791  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:08.627635  371983 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 09:13:08.627663  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:08.627641  371983 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 09:13:08.627680  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
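From here kapi.go simply polls each label selector until the pods report Ready; the repeated "waiting for pod ... Pending" lines below are that loop ticking roughly every half second. The same gates, expressed directly with kubectl (selectors and namespaces from the waits above):

    kubectl -n kube-system   wait pod -l kubernetes.io/minikube-addons=registry            --for=condition=Ready --timeout=5m
    kubectl -n ingress-nginx wait pod -l app.kubernetes.io/name=ingress-nginx              --for=condition=Ready --timeout=5m
    kubectl -n kube-system   wait pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=5m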
	I1025 09:13:08.709267  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:13:08.710496  371983 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:13:08.710520  371983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1025 09:13:08.842950  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:13:09.038028  371983 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-442185" context rescaled to 1 replicas
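The rescale trims the stock two-replica coredns Deployment (both replicas are visible in the pod list above) down to one for this single-node cluster; the direct equivalent is:

    kubectl -n kube-system scale deployment coredns --replicas=1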
	I1025 09:13:09.103404  371983 api_server.go:72] duration metric: took 15.543719208s to wait for apiserver process to appear ...
	I1025 09:13:09.103437  371983 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:13:09.103462  371983 api_server.go:253] Checking apiserver healthz at https://192.168.39.30:8443/healthz ...
	I1025 09:13:09.163093  371983 api_server.go:279] https://192.168.39.30:8443/healthz returned 200:
	ok
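The healthz gate is a plain HTTPS GET against the apiserver's secure port, reproduced by hand below (endpoint from the log; -k because the serving certificate is signed by minikube's own private CA rather than a public one):

    curl -k https://192.168.39.30:8443/healthz
    # ok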
	I1025 09:13:09.178675  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:09.178689  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:09.178895  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:09.205402  371983 api_server.go:141] control plane version: v1.34.1
	I1025 09:13:09.205444  371983 api_server.go:131] duration metric: took 101.998422ms to wait for apiserver health ...
	I1025 09:13:09.205458  371983 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:13:09.268101  371983 system_pods.go:59] 20 kube-system pods found
	I1025 09:13:09.268178  371983 system_pods.go:61] "amd-gpu-device-plugin-b27h4" [6fe59733-87d3-4f8d-943c-639180f87982] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 09:13:09.268211  371983 system_pods.go:61] "coredns-66bc5c9577-6r8k9" [00643cc6-21d9-4c41-8b7b-5d4039ce8368] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:13:09.268231  371983 system_pods.go:61] "coredns-66bc5c9577-cjwgv" [561a8fef-19db-4928-935c-4e50ac165a83] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:13:09.268242  371983 system_pods.go:61] "csi-hostpath-attacher-0" [98d1e859-c605-4fe1-b6ec-3058aaec8e8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:13:09.268255  371983 system_pods.go:61] "csi-hostpath-resizer-0" [67aa195c-2aba-43af-9e98-3c20ddf0b100] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:13:09.268276  371983 system_pods.go:61] "csi-hostpathplugin-cx4q4" [e002ab51-fe47-484d-ada4-339ab856f498] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:13:09.268286  371983 system_pods.go:61] "etcd-addons-442185" [c0f22587-d7e2-46fb-b2ab-207c9caff117] Running
	I1025 09:13:09.268293  371983 system_pods.go:61] "kube-apiserver-addons-442185" [426c795f-31a9-4feb-be52-03f4482bcf30] Running
	I1025 09:13:09.268304  371983 system_pods.go:61] "kube-controller-manager-addons-442185" [47e2cf50-f6b0-49d6-ad57-31f86708aa5e] Running
	I1025 09:13:09.268313  371983 system_pods.go:61] "kube-ingress-dns-minikube" [6bf66784-b533-414b-b4c7-1a207297fef5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:13:09.268322  371983 system_pods.go:61] "kube-proxy-cx6mj" [ec6625f3-5a95-4d1f-9e48-6f3c80eafef8] Running
	I1025 09:13:09.268328  371983 system_pods.go:61] "kube-scheduler-addons-442185" [43064871-7d91-49b9-b3c3-6425dcbef9a5] Running
	I1025 09:13:09.268336  371983 system_pods.go:61] "metrics-server-85b7d694d7-bfhsw" [fed32d2b-9d1b-420c-97bb-ab8a81af5ab0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:13:09.268344  371983 system_pods.go:61] "nvidia-device-plugin-daemonset-t9l94" [f10e3f67-7921-4e3b-ab1b-0b86e4475c8d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:13:09.268364  371983 system_pods.go:61] "registry-6b586f9694-jmdnx" [113bb8bd-ad11-4695-97d8-f5f7fca0a88f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:13:09.268373  371983 system_pods.go:61] "registry-creds-764b6fb674-qdzmh" [ee7e398e-5cce-45cb-80dd-685b817d9b9d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:13:09.268391  371983 system_pods.go:61] "registry-proxy-c2qpq" [ad98b74f-93c3-4aec-9f8d-d9bb38aa1400] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:13:09.268400  371983 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mgvpl" [7e4ea92e-95c1-467f-acf4-9f56ec942e73] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:13:09.268412  371983 system_pods.go:61] "snapshot-controller-7d9fbc56b8-sc5cq" [e255a4a5-0db7-4665-a965-015cdb32983f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:13:09.268421  371983 system_pods.go:61] "storage-provisioner" [aad0968e-8a05-47c4-83c5-cc6fdeeb884a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:13:09.268432  371983 system_pods.go:74] duration metric: took 62.966935ms to wait for pod list to return data ...
	I1025 09:13:09.268447  371983 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:13:09.348640  371983 default_sa.go:45] found service account: "default"
	I1025 09:13:09.348669  371983 default_sa.go:55] duration metric: took 80.213478ms for default service account to be created ...
	I1025 09:13:09.348681  371983 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:13:09.380678  371983 system_pods.go:86] 20 kube-system pods found
	I1025 09:13:09.380717  371983 system_pods.go:89] "amd-gpu-device-plugin-b27h4" [6fe59733-87d3-4f8d-943c-639180f87982] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 09:13:09.380725  371983 system_pods.go:89] "coredns-66bc5c9577-6r8k9" [00643cc6-21d9-4c41-8b7b-5d4039ce8368] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:13:09.380735  371983 system_pods.go:89] "coredns-66bc5c9577-cjwgv" [561a8fef-19db-4928-935c-4e50ac165a83] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:13:09.380752  371983 system_pods.go:89] "csi-hostpath-attacher-0" [98d1e859-c605-4fe1-b6ec-3058aaec8e8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:13:09.380760  371983 system_pods.go:89] "csi-hostpath-resizer-0" [67aa195c-2aba-43af-9e98-3c20ddf0b100] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:13:09.380777  371983 system_pods.go:89] "csi-hostpathplugin-cx4q4" [e002ab51-fe47-484d-ada4-339ab856f498] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:13:09.380784  371983 system_pods.go:89] "etcd-addons-442185" [c0f22587-d7e2-46fb-b2ab-207c9caff117] Running
	I1025 09:13:09.380792  371983 system_pods.go:89] "kube-apiserver-addons-442185" [426c795f-31a9-4feb-be52-03f4482bcf30] Running
	I1025 09:13:09.380797  371983 system_pods.go:89] "kube-controller-manager-addons-442185" [47e2cf50-f6b0-49d6-ad57-31f86708aa5e] Running
	I1025 09:13:09.380809  371983 system_pods.go:89] "kube-ingress-dns-minikube" [6bf66784-b533-414b-b4c7-1a207297fef5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:13:09.380818  371983 system_pods.go:89] "kube-proxy-cx6mj" [ec6625f3-5a95-4d1f-9e48-6f3c80eafef8] Running
	I1025 09:13:09.380824  371983 system_pods.go:89] "kube-scheduler-addons-442185" [43064871-7d91-49b9-b3c3-6425dcbef9a5] Running
	I1025 09:13:09.380830  371983 system_pods.go:89] "metrics-server-85b7d694d7-bfhsw" [fed32d2b-9d1b-420c-97bb-ab8a81af5ab0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:13:09.380836  371983 system_pods.go:89] "nvidia-device-plugin-daemonset-t9l94" [f10e3f67-7921-4e3b-ab1b-0b86e4475c8d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:13:09.380844  371983 system_pods.go:89] "registry-6b586f9694-jmdnx" [113bb8bd-ad11-4695-97d8-f5f7fca0a88f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:13:09.380849  371983 system_pods.go:89] "registry-creds-764b6fb674-qdzmh" [ee7e398e-5cce-45cb-80dd-685b817d9b9d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:13:09.380859  371983 system_pods.go:89] "registry-proxy-c2qpq" [ad98b74f-93c3-4aec-9f8d-d9bb38aa1400] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:13:09.380864  371983 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mgvpl" [7e4ea92e-95c1-467f-acf4-9f56ec942e73] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:13:09.380876  371983 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sc5cq" [e255a4a5-0db7-4665-a965-015cdb32983f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:13:09.380883  371983 system_pods.go:89] "storage-provisioner" [aad0968e-8a05-47c4-83c5-cc6fdeeb884a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:13:09.380914  371983 system_pods.go:126] duration metric: took 32.214595ms to wait for k8s-apps to be running ...
	I1025 09:13:09.380930  371983 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:13:09.380992  371983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
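The kubelet readiness probe above shells into the guest and relies entirely on the exit status of `systemctl is-active --quiet`. A minimal Go sketch of that check, assuming plain local exec rather than minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// serviceActive reports whether a systemd unit is active.
// `systemctl is-active --quiet` prints nothing and signals the
// state purely through its exit code (0 == active).
func serviceActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", serviceActive("kubelet"))
}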
	I1025 09:13:09.452163  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:09.452428  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:09.453780  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
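The kapi.go:96 lines that follow are one poll loop per addon label selector: list the pods matching the selector and keep waiting while any of them is still Pending. A hedged client-go sketch of that pattern (minikube's actual implementation differs; the selector comes from the log, the namespace and polling interval are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until every pod matching selector in ns has
// left Pending and reports phase Running.
func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // nothing yet or transient API error: keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitForLabel(cs, "kube-system",
		"kubernetes.io/minikube-addons=registry", 5*time.Minute))
}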
	I1025 09:13:09.935207  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:09.939457  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:09.939471  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:10.436144  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:10.436232  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:10.437748  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:10.936692  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:10.939540  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:10.940359  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:11.453565  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:11.479751  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:11.479998  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:11.936879  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:11.940177  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:11.940393  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:12.277569  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.723274839s)
	W1025 09:13:12.277616  371983 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:13:12.277646  371983 retry.go:31] will retry after 211.143661ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
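Every retry of this apply fails the same way: kubectl's client-side validation rejects ig-crd.yaml because at least one document in the file carries no top-level apiVersion or kind, so the command exits 1 no matter how often it is rerun. The other manifests in the same invocation still go through, which is why everything above reports "unchanged" or "configured". A minimal Go sketch of the invariant being violated, assuming a local copy of the file (the path and the naive split on "---" are illustrative only):

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("ig-crd.yaml") // hypothetical local copy
	if err != nil {
		panic(err)
	}
	// kubectl validates each "---"-separated YAML document independently.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			fmt.Printf("doc %d: unparseable: %v\n", i, err)
			continue
		}
		// This is the condition behind "[apiVersion not set, kind not set]".
		if m["apiVersion"] == nil || m["kind"] == nil {
			fmt.Printf("doc %d: apiVersion or kind not set\n", i)
		}
	}
}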
	I1025 09:13:12.277687  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.568369104s)
	I1025 09:13:12.277718  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.434740926s)
	I1025 09:13:12.277757  371983 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.896750785s)
	I1025 09:13:12.277778  371983 system_svc.go:56] duration metric: took 2.896844395s WaitForService to wait for kubelet
	I1025 09:13:12.277791  371983 kubeadm.go:586] duration metric: took 18.718113747s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:13:12.277818  371983 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:13:12.278787  371983 addons.go:479] Verifying addon gcp-auth=true in "addons-442185"
	I1025 09:13:12.280316  371983 out.go:179] * Verifying gcp-auth addon...
	I1025 09:13:12.282346  371983 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 09:13:12.284316  371983 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 09:13:12.284346  371983 node_conditions.go:123] node cpu capacity is 2
	I1025 09:13:12.284362  371983 node_conditions.go:105] duration metric: took 6.536732ms to run NodePressure ...
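The NodePressure verification above also records the node's capacity (2 CPUs, 17734596Ki of ephemeral storage). A hedged client-go sketch of reading those capacity fields, not minikube's code:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a map of resource name to quantity on the node status.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}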
	I1025 09:13:12.284375  371983 start.go:241] waiting for startup goroutines ...
	I1025 09:13:12.289651  371983 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 09:13:12.289673  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:12.438254  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:12.440012  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:12.489214  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:13:12.536519  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:12.787591  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:12.934338  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:12.934967  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:12.937096  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:13.286175  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:13.434273  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:13.434651  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:13.439413  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:13.638759  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.149500883s)
	W1025 09:13:13.638799  371983 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:13:13.638828  371983 retry.go:31] will retry after 754.446147ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:13:13.787389  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:13.937768  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:13.942877  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:13.943061  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:14.287698  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:14.393974  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:13:14.440144  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:14.440945  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:14.442448  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:14.785697  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:14.936007  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:14.937069  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:14.937624  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:15.287476  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:15.436096  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:15.436100  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:15.436133  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:15.474354  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.080328202s)
	W1025 09:13:15.474398  371983 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:13:15.474425  371983 retry.go:31] will retry after 557.957768ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:13:15.786752  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:15.935748  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:15.938384  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:15.939395  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:16.032661  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:13:16.289562  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:16.434849  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:16.439345  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:16.439987  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:16.788364  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:16.934949  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:16.935605  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:16.938947  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:13:16.991458  371983 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:13:16.991500  371983 retry.go:31] will retry after 1.187160764s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:13:17.286246  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:17.434842  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:17.435147  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:17.435474  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:17.790714  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:17.933698  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:17.934149  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:17.934872  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:18.179253  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:13:18.287812  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:18.433835  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:18.434642  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:18.435562  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:18.787676  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:19.110717  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:19.110717  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:19.112948  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:19.288514  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:19.304962  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.125657131s)
	W1025 09:13:19.305020  371983 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:13:19.305049  371983 retry.go:31] will retry after 2.24952567s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
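The retry.go:31 delays grow from 211ms through 754ms and 1.19s to 2.25s here, and keep climbing toward roughly 7s further down; the occasional shorter step (557ms after 754ms) is consistent with randomized jitter on an exponential schedule. A minimal sketch of that kind of backoff; the base and jitter constants are assumptions, not minikube's:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// backoff returns an exponentially growing delay with up to 50%
// random jitter, so concurrent retriers do not fire in lockstep.
func backoff(attempt int) time.Duration {
	base := 200 * time.Millisecond
	d := base * time.Duration(1<<attempt) // 200ms, 400ms, 800ms, ...
	jitter := time.Duration(rand.Int63n(int64(d / 2)))
	return d + jitter
}

func main() {
	for attempt := 0; attempt < 6; attempt++ {
		fmt.Println("will retry after", backoff(attempt))
	}
}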
	I1025 09:13:19.435275  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:19.435548  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:19.435661  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:19.786992  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:19.934384  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:19.934555  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:19.935158  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:20.287407  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:20.433602  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:20.435834  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:20.436763  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:20.786928  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:20.932671  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:20.932916  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:20.934085  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:21.287068  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:21.434058  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:21.435500  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:21.435862  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:21.555112  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:13:21.796028  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:21.935337  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:21.935756  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:21.936711  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:22.286546  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:22.437675  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:22.438165  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:22.439331  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:13:22.443863  371983 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:13:22.443903  371983 retry.go:31] will retry after 4.073033546s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:13:22.791235  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:22.951812  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:22.951980  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:22.951991  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:23.286471  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:23.434971  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:23.435005  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:23.435266  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:23.786178  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:23.937127  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:23.937666  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:23.937729  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:24.286686  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:24.435796  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:24.435869  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:24.437644  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:24.786669  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:24.935354  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:24.935753  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:24.936499  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:25.285821  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:25.432641  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:25.432858  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:25.433101  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:25.788546  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:25.934851  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:25.934883  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:25.935521  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:26.289612  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:26.434895  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:26.435143  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:26.435861  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:26.518112  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:13:26.786516  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:26.934131  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:26.939062  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:26.939092  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:27.288039  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:13:27.383118  371983 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:13:27.383154  371983 retry.go:31] will retry after 5.221115939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:13:27.436178  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:27.437241  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:27.437762  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:27.788073  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:27.957316  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:28.057264  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:28.057610  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:28.285884  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:28.434260  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:28.434379  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:28.434417  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:28.789964  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:28.933025  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:28.933138  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:28.934327  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:29.292168  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:29.436770  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:29.437069  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:29.437168  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:29.785429  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:29.936245  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:29.936837  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:29.937356  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:30.285810  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:30.434337  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:30.434347  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:30.436140  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:30.786504  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:30.933490  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:30.933712  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:30.935337  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:31.286145  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:31.433050  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:31.433166  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:31.434449  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:31.794866  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:31.934466  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:31.934851  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:31.936594  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:32.287704  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:32.434342  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:32.434565  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:32.435725  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:32.604876  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:13:32.785979  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:32.933071  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:32.938926  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:32.938968  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:33.287542  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:33.435908  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:33.439766  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:33.440290  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:33.669326  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.06439171s)
	W1025 09:13:33.669374  371983 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:13:33.669399  371983 retry.go:31] will retry after 6.044482252s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:13:33.786440  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:33.934109  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:33.934163  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:33.935702  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:34.286937  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:34.433073  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:34.434787  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:34.434938  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:34.790025  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:34.934420  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:34.934600  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:34.935853  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:35.288060  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:35.437417  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:35.444088  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:35.445179  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:35.786038  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:35.934091  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:35.934440  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:35.934587  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:36.288020  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:36.442291  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:36.444163  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:36.444180  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:36.790048  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:36.936037  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:36.938852  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:36.939274  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:37.287166  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:37.432771  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:37.434145  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:37.435921  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:37.788043  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:37.935711  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:37.935887  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:37.936023  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:38.289373  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:38.434760  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:38.435344  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:38.436048  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:38.787083  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:38.932809  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:38.938256  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:38.938646  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:39.286715  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:39.435387  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:39.436148  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:39.436515  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:39.714855  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:13:39.787245  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:39.942054  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:39.945468  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:39.945593  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:40.314849  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:40.434531  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:40.434586  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:40.436493  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:40.787955  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:40.856162  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.141252291s)
	W1025 09:13:40.856229  371983 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:13:40.856256  371983 retry.go:31] will retry after 7.149991316s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:13:40.938199  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:40.938421  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:40.940234  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:41.286627  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:41.433724  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:41.434609  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:41.435842  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:41.787021  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:41.934483  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:41.934578  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:41.934589  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:42.286337  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:42.433816  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:42.434357  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:42.434858  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:42.786531  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:42.934464  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:42.935609  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:42.935638  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:43.285684  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:43.434884  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:43.435367  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:43.436590  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:43.786457  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:43.934093  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:43.935582  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:43.935950  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:44.288406  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:44.437757  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:44.437969  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:44.438074  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:44.788105  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:44.936436  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:44.937951  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:44.940296  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:45.286890  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:45.434980  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:45.435152  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:45.436215  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:45.785914  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:45.932702  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:45.933917  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:45.935431  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:46.285745  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:46.436017  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:46.436053  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:46.436874  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:46.788837  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:46.937784  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:46.938241  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:46.939991  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:47.285262  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:47.435391  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:47.435547  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:47.435557  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:47.789011  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:47.933711  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:47.934711  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:47.935985  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:48.007299  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:13:48.287380  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:48.439647  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:48.439735  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:48.441910  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:48.789697  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:48.935528  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:48.936329  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:48.937833  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:49.051386  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.044031798s)
	W1025 09:13:49.051464  371983 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:13:49.051494  371983 retry.go:31] will retry after 17.713909065s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
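
The stderr in these blocks is kubectl's client-side schema validation: every document in an applied file must carry top-level apiVersion and kind fields, and /etc/kubernetes/addons/ig-crd.yaml evidently satisfies neither, so each retry fails identically. A rough Go sketch of that same precondition check (illustrative only; the path is copied from the log, gopkg.in/yaml.v3 is an assumed dependency, and real kubectl also splits multi-document files on "---"):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency
)

// checkManifest mirrors the client-side check kubectl performed above:
// a manifest must declare top-level apiVersion and kind.
// Single-document sketch; kubectl validates each document in the file.
func checkManifest(path string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var doc map[string]interface{}
	if err := yaml.Unmarshal(raw, &doc); err != nil {
		return fmt.Errorf("parsing %s: %w", path, err)
	}
	var missing []string
	for _, field := range []string{"apiVersion", "kind"} {
		if _, ok := doc[field]; !ok {
			missing = append(missing, field+" not set")
		}
	}
	if len(missing) > 0 {
		return fmt.Errorf("error validating %q: %v", path, missing)
	}
	return nil
}

func main() {
	// Path taken from the log; run wherever the addon manifests live.
	if err := checkManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
		fmt.Println(err)
	}
}
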
	I1025 09:13:49.288649  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:49.434968  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:13:49.435268  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:49.436800  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:49.786215  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:49.935268  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:49.935967  371983 kapi.go:107] duration metric: took 41.50711393s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 09:13:49.936007  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
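
The kapi.go:96/kapi.go:107 pair just above shows the shape of the wait: poll the pods matching a label selector roughly twice a second, log each probe while the state is Pending, and emit a "duration metric" line once the pod runs (41.5s here for the registry selector). A self-contained sketch of that loop follows; podPhase is a hypothetical stand-in for the real Kubernetes API lookup, and this is not minikube's actual kapi.go:

package main

import (
	"errors"
	"fmt"
	"time"
)

// podPhase is a hypothetical stand-in for listing pods by label
// selector through the API server and returning their phase.
func podPhase(selector string) string {
	return "Pending" // the real code would issue a client-go List call
}

// waitForPod polls until the labeled pod leaves Pending, logging each
// probe the way kapi.go:96 does, then reports the total wait duration
// the way kapi.go:107 does.
func waitForPod(selector string, timeout time.Duration) error {
	start := time.Now()
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	deadline := time.After(timeout)
	for {
		select {
		case <-ticker.C:
			phase := podPhase(selector)
			if phase == "Running" {
				fmt.Printf("duration metric: took %s to wait for %s\n",
					time.Since(start), selector)
				return nil
			}
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, phase)
		case <-deadline:
			return errors.New("timed out waiting for " + selector)
		}
	}
}

func main() {
	_ = waitForPod("kubernetes.io/minikube-addons=registry", 2*time.Second)
}
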
	I1025 09:13:50.288865  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:50.440022  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:50.440298  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:50.786621  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:50.936375  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:50.936442  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:51.285441  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:51.433859  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:51.435171  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:51.786259  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:51.932854  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:51.934459  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:52.285858  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:52.653407  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:52.654341  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:52.788453  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:52.934389  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:52.935100  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:53.287622  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:53.435813  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:53.436804  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:53.789155  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:53.933883  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:53.933972  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:54.287338  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:54.433392  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:54.434818  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:54.841402  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:54.935153  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:54.935303  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:55.287021  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:55.433103  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:55.434125  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:55.786961  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:55.934120  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:55.934571  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:56.285529  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:56.434868  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:56.435630  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:56.786773  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:57.286387  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:57.288809  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:57.289262  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:57.435208  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:57.435573  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:57.789073  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:57.935828  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:57.936380  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:58.286376  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:58.433649  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:58.435331  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:58.789333  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:58.934347  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:58.934531  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:59.286373  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:59.434928  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:59.435023  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:13:59.943257  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:13:59.943568  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:13:59.943834  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:00.287518  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:00.435503  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:00.435632  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:00.793027  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:00.936181  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:00.941081  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:01.287606  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:01.435748  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:01.437645  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:01.794486  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:01.936175  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:01.937351  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:02.288677  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:02.434541  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:02.434914  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:02.786780  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:02.934159  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:02.934541  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:03.286011  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:03.434141  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:03.434819  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:03.787028  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:03.934675  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:03.939100  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:04.287528  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:04.435059  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:04.436407  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:04.786615  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:04.933560  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:04.934297  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:05.286849  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:05.434575  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:05.434634  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:05.786425  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:05.933759  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:05.935058  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:06.289158  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:06.434798  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:06.437213  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:06.766677  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:14:06.787438  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:06.934583  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:06.934583  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:07.287677  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:07.437787  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:07.441353  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:07.789619  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:07.841369  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.074640212s)
	W1025 09:14:07.841420  371983 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:14:07.841448  371983 retry.go:31] will retry after 13.512042754s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:14:07.935363  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:07.935684  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:08.287828  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:08.437272  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:08.437677  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:08.787208  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:08.934413  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:08.935536  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:09.288575  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:09.444161  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:09.444463  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:09.790503  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:09.933524  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:09.933832  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:10.286713  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:10.433728  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:10.433811  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:10.787429  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:10.934224  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:10.935210  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:11.288343  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:11.438206  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:11.441049  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:11.786320  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:11.933670  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:11.933950  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:12.286425  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:12.434439  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:12.436100  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:12.787716  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:12.934301  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:12.934987  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:13.287502  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:13.435016  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:13.435132  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:13.786532  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:13.934685  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:13.936408  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:14.287718  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:14.435200  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:14.435393  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:14.911233  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:14.933329  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:14.936702  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:15.295607  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:15.439484  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:15.441127  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:15.787849  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:15.935234  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:15.936228  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:16.286980  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:16.432913  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:16.434257  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:16.792415  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:16.939813  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:16.940961  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:17.287044  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:17.432737  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:17.435757  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:17.788204  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:17.933929  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:17.934328  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:18.286043  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:18.434682  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:18.436276  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:18.787020  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:18.933022  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:18.935379  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:19.286446  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:19.434658  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:19.435366  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:19.789003  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:19.934315  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:19.934653  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:20.286527  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:20.452042  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:20.452069  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:20.842018  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:20.934037  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:20.935666  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:21.286676  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:21.353766  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:14:21.434355  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:21.434811  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:21.789779  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:21.936537  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:21.937102  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:22.289004  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:22.437290  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:22.439198  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:22.517620  371983 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.163802997s)
	W1025 09:14:22.517677  371983 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:14:22.517720  371983 retry.go:31] will retry after 18.221932218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:14:22.789049  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:22.934657  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:22.935898  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:23.361836  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:23.466145  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:23.467152  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:23.787084  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:23.933364  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:23.934068  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:24.286648  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:24.434431  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:24.434600  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:24.788650  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:24.933799  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:24.933990  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:25.287071  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:25.433942  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:25.434460  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:25.786644  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:25.935256  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:25.937313  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:26.289457  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:26.433119  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:26.433279  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:26.785306  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:26.935263  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:26.935500  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:27.287022  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:27.435702  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:27.437790  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:27.788209  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:28.129469  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:28.129837  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:28.295196  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:28.439006  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:28.440635  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:28.788971  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:28.935830  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:28.935901  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:29.287846  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:29.435483  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:29.437110  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:29.787737  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:29.940039  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:29.940057  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:30.288863  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:30.441406  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:30.442326  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:30.786573  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:30.934384  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:30.934495  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:31.288447  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:31.437959  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:31.438619  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:31.786202  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:31.939148  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:31.940710  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:32.287657  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:32.436221  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:32.438762  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:32.786247  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:32.942640  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:32.942821  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:33.287174  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:33.434683  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:33.434810  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:33.788363  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:33.933933  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:33.934668  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:34.315396  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:34.436214  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:34.436371  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:34.790908  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:34.934024  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:34.935039  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:35.288997  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:35.435787  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:35.436349  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:35.866340  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:35.967387  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:35.968110  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:14:36.287337  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:36.437050  371983 kapi.go:107] duration metric: took 1m28.007495123s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 09:14:36.441491  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:36.785908  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:36.932351  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:37.286310  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:37.433218  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:37.786262  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:37.933567  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:38.285622  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:38.433761  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:38.786386  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:38.933010  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:39.286162  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:39.432678  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:39.789004  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:39.932443  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:40.285595  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:40.433478  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:40.740773  371983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:14:40.786667  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:40.932898  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:41.287703  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:41.433219  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:14:41.453119  371983 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:14:41.453297  371983 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
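
The validation failure above is self-explanatory once decoded: kubectl refuses /etc/kubernetes/addons/ig-crd.yaml because the manifest lacks the two top-level fields every Kubernetes object requires, apiVersion and kind. As a minimal sketch of what the validator expects (the group and resource names below are hypothetical placeholders, not the real inspektor-gadget CRD contents):

    apiVersion: apiextensions.k8s.io/v1        # "apiVersion not set" means this line was missing
    kind: CustomResourceDefinition             # "kind not set" means this line was missing
    metadata:
      name: traces.gadget.example.io           # hypothetical; must be <plural>.<group>
    spec:
      group: gadget.example.io                 # hypothetical group
      scope: Namespaced
      names:
        plural: traces
        singular: trace
        kind: Trace
      versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object

Passing --validate=false, as the error message suggests, only disables the client-side schema check; a manifest with no kind still cannot be applied.
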
	I1025 09:14:41.786361  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:41.933085  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:42.286235  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:42.432390  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:42.785650  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:42.933397  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:43.285383  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:43.432849  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:43.786749  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:43.933157  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:44.286634  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:44.433168  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:44.787248  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:44.932885  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:45.286408  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:45.433177  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:45.786340  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:45.933125  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:46.286832  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:46.433428  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:46.785767  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:46.933603  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:47.286483  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:47.433373  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:47.786839  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:47.933708  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:48.286443  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:48.433834  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:48.786167  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:48.933308  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:49.285346  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:49.432804  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:49.787621  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:49.933440  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:50.286035  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:50.433448  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:50.785837  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:50.936958  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:51.287041  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:51.432638  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:51.786733  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:51.934160  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:52.285868  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:52.433236  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:52.787490  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:52.933623  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:53.285878  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:53.433331  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:53.786365  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:53.932612  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:54.286151  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:54.432753  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:54.786097  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:54.932587  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:55.285763  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:55.433355  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:55.785847  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:55.933004  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:56.286956  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:56.433494  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:56.787159  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:56.932922  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:57.286272  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:57.432988  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:57.786556  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:57.934436  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:58.285687  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:58.434102  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:58.787416  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:58.933659  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:59.286134  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:59.432412  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:14:59.785709  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:14:59.933006  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:00.286999  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:00.432985  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:00.787302  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:00.933484  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:01.286179  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:01.433863  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:01.787207  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:01.934126  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:02.286747  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:02.433275  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:02.786492  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:02.933589  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:03.285917  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:03.434657  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:03.786149  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:03.933891  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:04.287449  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:04.433108  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:04.787381  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:04.933011  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:05.287243  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:05.433405  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:05.785680  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:05.934461  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:06.286109  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:06.433587  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:06.785887  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:06.932737  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:07.286978  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:07.433527  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:07.786492  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:07.933610  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:08.287067  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:08.432822  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:08.786312  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:08.934822  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:09.286150  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:09.434429  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:09.786126  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:09.933594  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:10.286298  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:10.433508  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:10.785830  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:10.935935  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:11.287014  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:11.432840  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:11.786426  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:11.933127  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:12.285934  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:12.432410  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:12.786640  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:12.933799  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:13.286359  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:13.432896  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:13.786960  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:13.933430  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:14.285764  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:14.433110  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:14.787237  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:14.932862  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:15.286340  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:15.433267  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:15.785237  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:15.933915  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:16.286123  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:16.432262  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:16.786118  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:16.933841  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:17.293486  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:17.438156  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:17.793327  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:17.936985  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:18.286286  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:18.433931  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:18.787035  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:18.934112  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:19.299844  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:19.439079  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:19.788221  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:19.934643  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:20.288123  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:20.433889  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:20.787464  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:20.933250  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:21.287683  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:21.434182  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:21.808574  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:21.939342  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:22.292223  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:22.434637  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:22.786771  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:22.934012  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:23.288020  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:23.434773  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:23.793010  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:23.934507  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:24.287119  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:24.434322  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:24.787137  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:24.934108  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:25.286470  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:25.437235  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:25.788823  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:25.933410  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:26.289870  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:26.629350  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:26.789065  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:26.933672  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:27.287451  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:27.432914  371983 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:15:27.786944  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:27.933349  371983 kapi.go:107] duration metric: took 2m19.504398591s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 09:15:28.286451  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:28.786692  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:29.286819  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:29.788752  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:30.288851  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:30.786245  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:31.287844  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:31.786049  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:32.286592  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:32.787163  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:33.287115  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:33.786655  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:34.287885  371983 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:15:34.871382  371983 kapi.go:107] duration metric: took 2m22.589030863s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 09:15:34.873013  371983 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-442185 cluster.
	I1025 09:15:34.874199  371983 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 09:15:34.875345  371983 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1025 09:15:34.876558  371983 out.go:179] * Enabled addons: cloud-spanner, volcano, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, registry-creds, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
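
Per the notice above, an individual pod can opt out of the credential mount by carrying the gcp-auth-skip-secret label. A minimal sketch (the pod name and image are placeholders; the label key is taken from the notice, and the "true" value is the conventional usage, assumed here):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                   # hypothetical
      labels:
        gcp-auth-skip-secret: "true"       # key from the notice above; tells the gcp-auth webhook to skip this pod
    spec:
      containers:
        - name: app
          image: registry.k8s.io/pause:3.9   # placeholder image
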
	I1025 09:15:34.877507  371983 addons.go:514] duration metric: took 2m41.317799981s for enable addons: enabled=[cloud-spanner volcano amd-gpu-device-plugin nvidia-device-plugin storage-provisioner registry-creds ingress-dns metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1025 09:15:34.877549  371983 start.go:246] waiting for cluster config update ...
	I1025 09:15:34.877569  371983 start.go:255] writing updated cluster config ...
	I1025 09:15:34.877878  371983 ssh_runner.go:195] Run: rm -f paused
	I1025 09:15:34.883944  371983 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:15:34.888253  371983 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cjwgv" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:34.895249  371983 pod_ready.go:94] pod "coredns-66bc5c9577-cjwgv" is "Ready"
	I1025 09:15:34.895278  371983 pod_ready.go:86] duration metric: took 6.998883ms for pod "coredns-66bc5c9577-cjwgv" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:34.971410  371983 pod_ready.go:83] waiting for pod "etcd-addons-442185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:34.976707  371983 pod_ready.go:94] pod "etcd-addons-442185" is "Ready"
	I1025 09:15:34.976733  371983 pod_ready.go:86] duration metric: took 5.29758ms for pod "etcd-addons-442185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:34.979129  371983 pod_ready.go:83] waiting for pod "kube-apiserver-addons-442185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:34.984588  371983 pod_ready.go:94] pod "kube-apiserver-addons-442185" is "Ready"
	I1025 09:15:34.984607  371983 pod_ready.go:86] duration metric: took 5.458376ms for pod "kube-apiserver-addons-442185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:34.986787  371983 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-442185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:35.288649  371983 pod_ready.go:94] pod "kube-controller-manager-addons-442185" is "Ready"
	I1025 09:15:35.288677  371983 pod_ready.go:86] duration metric: took 301.870882ms for pod "kube-controller-manager-addons-442185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:35.488510  371983 pod_ready.go:83] waiting for pod "kube-proxy-cx6mj" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:35.888369  371983 pod_ready.go:94] pod "kube-proxy-cx6mj" is "Ready"
	I1025 09:15:35.888417  371983 pod_ready.go:86] duration metric: took 399.876831ms for pod "kube-proxy-cx6mj" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:36.089756  371983 pod_ready.go:83] waiting for pod "kube-scheduler-addons-442185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:36.488129  371983 pod_ready.go:94] pod "kube-scheduler-addons-442185" is "Ready"
	I1025 09:15:36.488175  371983 pod_ready.go:86] duration metric: took 398.387075ms for pod "kube-scheduler-addons-442185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:36.488209  371983 pod_ready.go:40] duration metric: took 1.604229188s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:15:36.535177  371983 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:15:36.537004  371983 out.go:179] * Done! kubectl is now configured to use "addons-442185" cluster and "default" namespace by default
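
The readiness loop logged above (pod_ready.go polling the kube-system control-plane labels) can be reproduced by hand with kubectl wait; a rough equivalent for one of those labels, assuming the same context and the 4m0s budget from the log:

    kubectl --context addons-442185 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
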
	
	
	==> Docker <==
	Oct 25 09:17:45 addons-442185 dockerd[1528]: time="2025-10-25T09:17:45.184008138Z" level=info msg="ignoring event" container=7b9ac5f4b1c9d664a860aa9b51e0c67e7d12ec54c0ba7fd186f4940a91f4b2d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:17:45 addons-442185 dockerd[1528]: time="2025-10-25T09:17:45.198457015Z" level=info msg="ignoring event" container=a07a68c4ba425f7b1f5edd72452eb9b2fa65033b5fc00d2da4ae285a57bd0825 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:17:45 addons-442185 dockerd[1528]: time="2025-10-25T09:17:45.338344427Z" level=info msg="ignoring event" container=e28131cca88db68f771a31a17fb6936b4a5d66724a0a9ff869713edb1f80fa9a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:17:45 addons-442185 dockerd[1528]: time="2025-10-25T09:17:45.552396861Z" level=info msg="ignoring event" container=1b58ca0259ce69f93d25060e6d4ce26f193d1c5c16f72d1f3703e5dcb14771c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:17:45 addons-442185 dockerd[1528]: time="2025-10-25T09:17:45.564152729Z" level=info msg="ignoring event" container=616bb00c3ab5a2b5c46ed3c015b652b036667473fd80ed07a5f553954adcfd8f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:18:21 addons-442185 dockerd[1528]: time="2025-10-25T09:18:21.927061296Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 25 09:18:22 addons-442185 dockerd[1528]: time="2025-10-25T09:18:22.411014185Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:18:33 addons-442185 dockerd[1528]: time="2025-10-25T09:18:33.008603177Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:18:33 addons-442185 cri-dockerd[1393]: time="2025-10-25T09:18:33Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: 1.0: Pulling from kicbase/echo-server"
	Oct 25 09:18:59 addons-442185 dockerd[1528]: time="2025-10-25T09:18:59.998600590Z" level=info msg="ignoring event" container=6ae435f30833bbbcfc5fd13d3e253793356653adbb43f2723467926c46c859ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:19:15 addons-442185 cri-dockerd[1393]: time="2025-10-25T09:19:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1827f12f758fa803eff79a72678f660d5b10b923666a13e5277711fa86429faa/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 25 09:19:15 addons-442185 dockerd[1528]: time="2025-10-25T09:19:15.739742041Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 25 09:19:16 addons-442185 dockerd[1528]: time="2025-10-25T09:19:16.224614477Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:19:31 addons-442185 dockerd[1528]: time="2025-10-25T09:19:31.931684958Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 25 09:19:32 addons-442185 dockerd[1528]: time="2025-10-25T09:19:32.414102355Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:19:59 addons-442185 dockerd[1528]: time="2025-10-25T09:19:59.933102298Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 25 09:20:00 addons-442185 dockerd[1528]: time="2025-10-25T09:20:00.411751036Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:20:06 addons-442185 dockerd[1528]: time="2025-10-25T09:20:06.698339650Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:20:41 addons-442185 dockerd[1528]: time="2025-10-25T09:20:41.926878197Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 25 09:20:42 addons-442185 dockerd[1528]: time="2025-10-25T09:20:42.416788526Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:21:15 addons-442185 dockerd[1528]: time="2025-10-25T09:21:15.595871828Z" level=info msg="ignoring event" container=1827f12f758fa803eff79a72678f660d5b10b923666a13e5277711fa86429faa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:21:45 addons-442185 cri-dockerd[1393]: time="2025-10-25T09:21:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/42bcb811466d4fb5f70f31859f95be239863bac1fd08c6a512eb4b6020c9d703/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 25 09:21:46 addons-442185 dockerd[1528]: time="2025-10-25T09:21:46.309861495Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 25 09:21:47 addons-442185 dockerd[1528]: time="2025-10-25T09:21:47.076433504Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:21:47 addons-442185 cri-dockerd[1393]: time="2025-10-25T09:21:47Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
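
The recurring "toomanyrequests" failures above are Docker Hub's unauthenticated pull rate limit, and they are what keeps the busybox image from ever arriving. One standard way to pull with credentials instead, sketched here with placeholder values (the secret name, username, and pod stub are all hypothetical; the registry-creds addon listed as enabled earlier in this log is another route):

    # kubectl create secret docker-registry regcred \
    #   --docker-server=https://index.docker.io/v1/ \
    #   --docker-username=<user> --docker-password=<token>
    apiVersion: v1
    kind: Pod
    metadata:
      name: authed-pull                  # hypothetical
    spec:
      imagePullSecrets:
        - name: regcred                  # must match the secret created above
      containers:
        - name: busybox
          image: docker.io/library/busybox:stable
          command: ["sleep", "3600"]
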
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b4a276d5be892       nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                                       5 minutes ago       Running             nginx                     0                   39d383bb9c1a3       nginx
	579fbf3269476       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                 5 minutes ago       Running             busybox                   0                   3d926d308552f       busybox
	484ec7b8e3509       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246              7 minutes ago       Running             local-path-provisioner    0                   0b71c4afa83ba       local-path-provisioner-648f6765c9-rzvcz
	45649a13f343d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:df0516c4c988694d65b19400d0990f129d5fd68f211cc826e7fdad55140626fd   7 minutes ago       Running             gadget                    0                   c77948bc27456       gadget-9xh8v
	d5a2df90e3c29       rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                      8 minutes ago       Running             amd-gpu-device-plugin     0                   8cc16ac6b40ef       amd-gpu-device-plugin-b27h4
	f0e1d4c04b892       6e38f40d628db                                                                                                       8 minutes ago       Running             storage-provisioner       0                   8941169fbf7b2       storage-provisioner
	2ddc0bfffb443       52546a367cc9e                                                                                                       9 minutes ago       Running             coredns                   0                   1cd0ec36c0257       coredns-66bc5c9577-cjwgv
	2226161dac809       fc25172553d79                                                                                                       9 minutes ago       Running             kube-proxy                0                   a402ebc0b9944       kube-proxy-cx6mj
	2599c95df6e6c       7dd6aaa1717ab                                                                                                       9 minutes ago       Running             kube-scheduler            0                   12de7b98913f2       kube-scheduler-addons-442185
	68e60120ef91a       c3994bc696102                                                                                                       9 minutes ago       Running             kube-apiserver            0                   5adbaf3d5c2ae       kube-apiserver-addons-442185
	83fb526f941e4       5f1f5298c888d                                                                                                       9 minutes ago       Running             etcd                      0                   e0b9fe431b14b       etcd-addons-442185
	77c728f9e5519       c80c8dbafe7dd                                                                                                       9 minutes ago       Running             kube-controller-manager   0                   1ca899eb8bc7f       kube-controller-manager-addons-442185
	
	
	==> coredns [2ddc0bfffb44] <==
	[INFO] 10.244.0.26:44909 - 58698 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000279857s
	[INFO] 10.244.0.26:44909 - 60121 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000107494s
	[INFO] 10.244.0.26:46914 - 58900 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000563083s
	[INFO] 10.244.0.26:44909 - 46817 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000119696s
	[INFO] 10.244.0.26:46914 - 2998 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000332578s
	[INFO] 10.244.0.26:44909 - 56591 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000109394s
	[INFO] 10.244.0.26:44909 - 24168 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000190193s
	[INFO] 10.244.0.26:46914 - 50836 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000424865s
	[INFO] 10.244.0.26:46914 - 27092 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000220405s
	[INFO] 10.244.0.26:46914 - 26395 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000322205s
	[INFO] 10.244.0.26:46914 - 5619 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000330859s
	[INFO] 10.244.0.26:34235 - 46470 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000305336s
	[INFO] 10.244.0.26:44082 - 50468 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.0008999s
	[INFO] 10.244.0.26:34235 - 10048 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000109636s
	[INFO] 10.244.0.26:34235 - 52277 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000519562s
	[INFO] 10.244.0.26:44082 - 12755 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000441688s
	[INFO] 10.244.0.26:34235 - 26356 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.001496432s
	[INFO] 10.244.0.26:34235 - 18614 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000177143s
	[INFO] 10.244.0.26:34235 - 6946 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000352411s
	[INFO] 10.244.0.26:34235 - 60233 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000377589s
	[INFO] 10.244.0.26:44082 - 59798 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000109104s
	[INFO] 10.244.0.26:44082 - 49620 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000171331s
	[INFO] 10.244.0.26:44082 - 37618 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000112477s
	[INFO] 10.244.0.26:44082 - 64496 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000622934s
	[INFO] 10.244.0.26:44082 - 7228 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000124864s
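
The long run of NXDOMAIN entries above is expected behavior, not an error: the pod's resolv.conf (rewritten with "options ndots:5" in the Docker log earlier) makes the resolver try every search-path suffix before the bare name, so each lookup of hello-world-app.default.svc.cluster.local first fails through the .svc.cluster.local and .cluster.local expansions and only then returns NOERROR. Querying the fully qualified name with a trailing dot skips that expansion; a sketch from inside any pod, against the cluster nameserver shown in the resolv.conf above (nslookup availability depends on the image):

    nslookup hello-world-app.default.svc.cluster.local. 10.96.0.10
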
	
	
	==> describe nodes <==
	Name:               addons-442185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-442185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=addons-442185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_12_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-442185
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:12:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-442185
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:21:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:17:23 +0000   Sat, 25 Oct 2025 09:12:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:17:23 +0000   Sat, 25 Oct 2025 09:12:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:17:23 +0000   Sat, 25 Oct 2025 09:12:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:17:23 +0000   Sat, 25 Oct 2025 09:12:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.30
	  Hostname:    addons-442185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 f8a191ff2d2244bfb68e2a9ddecda6ac
	  System UUID:                f8a191ff-2d22-44bf-b68e-2a9ddecda6ac
	  Boot ID:                    fa38f5c9-c30d-4745-a881-75a8c2e35b0a
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  default                     hello-world-app-5d498dc89-f697f                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  gadget                      gadget-9xh8v                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
	  kube-system                 amd-gpu-device-plugin-b27h4                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m3s
	  kube-system                 coredns-66bc5c9577-cjwgv                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m7s
	  kube-system                 etcd-addons-442185                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m13s
	  kube-system                 kube-apiserver-addons-442185                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-controller-manager-addons-442185                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-proxy-cx6mj                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-addons-442185                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m1s
	  local-path-storage          helper-pod-create-pvc-0147262d-97ff-4b73-9f09-75c63074e57d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  local-path-storage          local-path-provisioner-648f6765c9-rzvcz                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m4s   kube-proxy       
	  Normal  Starting                 9m12s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m12s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m12s  kubelet          Node addons-442185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m12s  kubelet          Node addons-442185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m12s  kubelet          Node addons-442185 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m9s   kubelet          Node addons-442185 status is now: NodeReady
	  Normal  RegisteredNode           9m8s   node-controller  Node addons-442185 event: Registered Node addons-442185 in Controller
	
	
	==> dmesg <==
	[  +5.787773] kauditd_printk_skb: 75 callbacks suppressed
	[  +5.058269] kauditd_printk_skb: 50 callbacks suppressed
	[  +4.271916] kauditd_printk_skb: 96 callbacks suppressed
	[Oct25 09:15] kauditd_printk_skb: 20 callbacks suppressed
	[  +8.828261] kauditd_printk_skb: 107 callbacks suppressed
	[  +3.735639] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.876455] kauditd_printk_skb: 17 callbacks suppressed
	[Oct25 09:16] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.467414] kauditd_printk_skb: 5 callbacks suppressed
	[  +4.918372] kauditd_printk_skb: 65 callbacks suppressed
	[ +11.411055] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.866226] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.674625] kauditd_printk_skb: 63 callbacks suppressed
	[  +1.202335] kauditd_printk_skb: 134 callbacks suppressed
	[  +0.000017] kauditd_printk_skb: 112 callbacks suppressed
	[Oct25 09:17] kauditd_printk_skb: 187 callbacks suppressed
	[  +4.216314] kauditd_printk_skb: 30 callbacks suppressed
	[  +9.624430] kauditd_printk_skb: 22 callbacks suppressed
	[  +8.772295] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.710458] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.644430] kauditd_printk_skb: 42 callbacks suppressed
	[  +6.120871] kauditd_printk_skb: 124 callbacks suppressed
	[Oct25 09:19] kauditd_printk_skb: 9 callbacks suppressed
	[Oct25 09:21] kauditd_printk_skb: 26 callbacks suppressed
	[ +30.371985] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [83fb526f941e] <==
	{"level":"info","ts":"2025-10-25T09:15:26.619502Z","caller":"traceutil/trace.go:172","msg":"trace[1941768051] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1400; }","duration":"201.713388ms","start":"2025-10-25T09:15:26.417777Z","end":"2025-10-25T09:15:26.619490Z","steps":["trace[1941768051] 'range keys from in-memory index tree'  (duration: 201.564937ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:15:26.619581Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"194.351142ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T09:15:26.619619Z","caller":"traceutil/trace.go:172","msg":"trace[959768768] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1400; }","duration":"194.389477ms","start":"2025-10-25T09:15:26.425217Z","end":"2025-10-25T09:15:26.619606Z","steps":["trace[959768768] 'range keys from in-memory index tree'  (duration: 194.293908ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:15:26.619750Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"266.942984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T09:15:26.620614Z","caller":"traceutil/trace.go:172","msg":"trace[1435684309] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1400; }","duration":"267.804946ms","start":"2025-10-25T09:15:26.352797Z","end":"2025-10-25T09:15:26.620602Z","steps":["trace[1435684309] 'range keys from in-memory index tree'  (duration: 266.884605ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:16:02.566093Z","caller":"traceutil/trace.go:172","msg":"trace[1995330607] transaction","detail":"{read_only:false; response_revision:1514; number_of_response:1; }","duration":"309.143135ms","start":"2025-10-25T09:16:02.256936Z","end":"2025-10-25T09:16:02.566079Z","steps":["trace[1995330607] 'process raft request'  (duration: 309.040036ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:16:02.567950Z","caller":"traceutil/trace.go:172","msg":"trace[1851017420] linearizableReadLoop","detail":"{readStateIndex:1571; appliedIndex:1572; }","duration":"260.28719ms","start":"2025-10-25T09:16:02.306405Z","end":"2025-10-25T09:16:02.566693Z","steps":["trace[1851017420] 'read index received'  (duration: 260.274164ms)","trace[1851017420] 'applied index is now lower than readState.Index'  (duration: 5.804µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T09:16:02.568514Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"262.099431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-10-25T09:16:02.568859Z","caller":"traceutil/trace.go:172","msg":"trace[664552649] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1514; }","duration":"262.247598ms","start":"2025-10-25T09:16:02.306401Z","end":"2025-10-25T09:16:02.568649Z","steps":["trace[664552649] 'agreement among raft nodes before linearized reading'  (duration: 262.019887ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:16:02.569207Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"262.534415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T09:16:02.569229Z","caller":"traceutil/trace.go:172","msg":"trace[1582799846] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots; range_end:; response_count:0; response_revision:1514; }","duration":"262.560928ms","start":"2025-10-25T09:16:02.306662Z","end":"2025-10-25T09:16:02.569223Z","steps":["trace[1582799846] 'agreement among raft nodes before linearized reading'  (duration: 262.515944ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:16:02.569478Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.204923ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T09:16:02.569495Z","caller":"traceutil/trace.go:172","msg":"trace[728309355] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1515; }","duration":"151.223378ms","start":"2025-10-25T09:16:02.418267Z","end":"2025-10-25T09:16:02.569490Z","steps":["trace[728309355] 'agreement among raft nodes before linearized reading'  (duration: 151.190353ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:16:02.569686Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"253.447096ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-10-25T09:16:02.569707Z","caller":"traceutil/trace.go:172","msg":"trace[747152856] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1515; }","duration":"253.473136ms","start":"2025-10-25T09:16:02.316228Z","end":"2025-10-25T09:16:02.569701Z","steps":["trace[747152856] 'agreement among raft nodes before linearized reading'  (duration: 253.358372ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:16:02.573539Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-25T09:16:02.256881Z","time spent":"309.502068ms","remote":"127.0.0.1:60680","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1507 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2025-10-25T09:16:45.850455Z","caller":"traceutil/trace.go:172","msg":"trace[911323350] linearizableReadLoop","detail":"{readStateIndex:1931; appliedIndex:1931; }","duration":"274.92615ms","start":"2025-10-25T09:16:45.575399Z","end":"2025-10-25T09:16:45.850326Z","steps":["trace[911323350] 'read index received'  (duration: 274.914877ms)","trace[911323350] 'applied index is now lower than readState.Index'  (duration: 6.618µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T09:16:45.850672Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"275.256023ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T09:16:45.850706Z","caller":"traceutil/trace.go:172","msg":"trace[442700851] range","detail":"{range_begin:/registry/validatingadmissionpolicies; range_end:; response_count:0; response_revision:1857; }","duration":"275.339196ms","start":"2025-10-25T09:16:45.575359Z","end":"2025-10-25T09:16:45.850698Z","steps":["trace[442700851] 'agreement among raft nodes before linearized reading'  (duration: 275.2266ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:16:45.851106Z","caller":"traceutil/trace.go:172","msg":"trace[1808423711] transaction","detail":"{read_only:false; response_revision:1858; number_of_response:1; }","duration":"323.702873ms","start":"2025-10-25T09:16:45.527396Z","end":"2025-10-25T09:16:45.851099Z","steps":["trace[1808423711] 'process raft request'  (duration: 323.631783ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:16:45.851209Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-25T09:16:45.527375Z","time spent":"323.754214ms","remote":"127.0.0.1:60750","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":574,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/ipaddresses/10.107.23.145\" mod_revision:0 > success:<request_put:<key:\"/registry/ipaddresses/10.107.23.145\" value_size:531 >> failure:<>"}
	{"level":"warn","ts":"2025-10-25T09:16:45.856810Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"233.741339ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-10-25T09:16:45.856891Z","caller":"traceutil/trace.go:172","msg":"trace[1712698124] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:1858; }","duration":"233.870943ms","start":"2025-10-25T09:16:45.623007Z","end":"2025-10-25T09:16:45.856878Z","steps":["trace[1712698124] 'agreement among raft nodes before linearized reading'  (duration: 233.609616ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:16:45.857153Z","caller":"traceutil/trace.go:172","msg":"trace[1797242787] transaction","detail":"{read_only:false; response_revision:1859; number_of_response:1; }","duration":"315.595435ms","start":"2025-10-25T09:16:45.541545Z","end":"2025-10-25T09:16:45.857141Z","steps":["trace[1797242787] 'process raft request'  (duration: 315.25403ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:16:45.858032Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-25T09:16:45.541522Z","time spent":"315.721228ms","remote":"127.0.0.1:57286","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2695,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/nginx\" mod_revision:1856 > success:<request_put:<key:\"/registry/pods/default/nginx\" value_size:2659 >> failure:<request_range:<key:\"/registry/pods/default/nginx\" > >"}
	
	
	==> kernel <==
	 09:22:00 up 9 min,  0 users,  load average: 0.09, 0.80, 0.71
	Linux addons-442185 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [68e60120ef91] <==
	W1025 09:16:12.489182       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1025 09:16:12.514230       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1025 09:16:12.548330       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	I1025 09:16:12.715961       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1025 09:16:13.716680       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1025 09:16:13.823793       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1025 09:16:29.433806       1 conn.go:339] Error on socket receive: read tcp 192.168.39.30:8443->192.168.39.1:52878: use of closed network connection
	E1025 09:16:29.640608       1 conn.go:339] Error on socket receive: read tcp 192.168.39.30:8443->192.168.39.1:52902: use of closed network connection
	I1025 09:16:39.181225       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.179.190"}
	I1025 09:16:45.282049       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1025 09:16:45.859450       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.23.145"}
	I1025 09:16:46.128848       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1025 09:16:57.298552       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.197.252"}
	I1025 09:17:24.573723       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1025 09:17:43.521513       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 09:17:43.521688       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 09:17:43.587240       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 09:17:43.587562       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 09:17:43.645064       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 09:17:43.645646       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 09:17:43.710768       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 09:17:43.710848       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1025 09:17:44.588521       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1025 09:17:44.704205       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1025 09:17:44.757406       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [77c728f9e551] <==
	E1025 09:21:14.084517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:21:18.283345       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:21:18.284724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:21:21.924296       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:21:21.925578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:21:26.026084       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:21:26.027647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:21:28.425275       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:21:28.426664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:21:29.775148       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:21:29.776812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:21:32.778599       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:21:32.780261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:21:33.703118       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:21:33.704134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:21:35.311733       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:21:35.313041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:21:37.418408       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:21:37.420425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:21:49.638020       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:21:49.639401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:21:52.047940       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:21:52.049445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:21:58.535757       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:21:58.537104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
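The repeating "Failed to watch *v1.PartialObjectMetadata" errors line up with the addon CRDs being removed earlier in the run (volcano at 09:16, snapshot.storage.k8s.io at 09:17:43 in the kube-apiserver log above): the controller-manager's metadata informers keep retrying list/watch for resource types that no longer exist. This is typically noisy but harmless and subsides once the garbage collector resyncs its discovery. Confirming the CRDs are really gone is a one-liner (standard external-snapshotter CRD names):

    kubectl --context addons-442185 get crd \
      volumesnapshots.snapshot.storage.k8s.io \
      volumesnapshotcontents.snapshot.storage.k8s.io \
      volumesnapshotclasses.snapshot.storage.k8s.io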
	
	
	==> kube-proxy [2226161dac80] <==
	I1025 09:12:55.207997       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:12:55.309394       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:12:55.309443       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.30"]
	E1025 09:12:55.309525       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:12:55.483709       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1025 09:12:55.483806       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 09:12:55.483845       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:12:55.515643       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:12:55.517366       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:12:55.517523       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:12:55.539674       1 config.go:200] "Starting service config controller"
	I1025 09:12:55.539704       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:12:55.539726       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:12:55.539729       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:12:55.539738       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:12:55.539742       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:12:55.540964       1 config.go:309] "Starting node config controller"
	I1025 09:12:55.540990       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:12:55.668395       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:12:55.675322       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:12:55.675404       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:12:55.675730       1 shared_informer.go:356] "Caches are synced" controller="node config"
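The IPv6 failure is expected here: the Buildroot guest kernel has no ip6tables nat table loaded, so kube-proxy falls back to the single-stack IPv4 mode this cluster uses anyway. If dual-stack were ever needed, the first thing to try (module name is the standard kernel one; whether Buildroot ships it is an assumption) would be:

    minikube -p addons-442185 ssh -- sudo modprobe ip6table_nat
    minikube -p addons-442185 ssh -- sudo ip6tables -t nat -L -n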
	
	
	==> kube-scheduler [2599c95df6e6] <==
	E1025 09:12:45.799021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:12:45.799130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:12:45.799457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:12:45.799684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:12:45.800129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:12:45.800316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:12:45.799873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:12:45.799966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:12:45.800018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:12:45.800077       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:12:45.799818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:12:45.799688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:12:46.645967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:12:46.872943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:12:46.875215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:12:46.880698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:12:46.924101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:12:46.928774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:12:46.959187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:12:46.972200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:12:46.980373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:12:47.016383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:12:47.056285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1025 09:12:47.064966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1025 09:12:49.388581       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
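The burst of "forbidden" list errors at 09:12:45-47 is the usual startup race: the scheduler comes up before its RBAC bindings are served, and the final "Caches are synced" line shows it recovered on its own. After bootstrap, the permissions can be spot-checked with:

    kubectl --context addons-442185 auth can-i list pods \
      --as=system:kube-scheduler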
	
	
	==> kubelet <==
	Oct 25 09:21:07 addons-442185 kubelet[2483]: E1025 09:21:07.684867    2483 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-0147262d-97ff-4b73-9f09-75c63074e57d" podUID="8a65d104-69bd-4308-ba40-ddc809559ae6"
	Oct 25 09:21:15 addons-442185 kubelet[2483]: E1025 09:21:15.683670    2483 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-f697f" podUID="a0d8a23e-1c3a-45bf-be1b-d186a2ce0f8d"
	Oct 25 09:21:15 addons-442185 kubelet[2483]: I1025 09:21:15.813357    2483 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/8a65d104-69bd-4308-ba40-ddc809559ae6-script\") pod \"8a65d104-69bd-4308-ba40-ddc809559ae6\" (UID: \"8a65d104-69bd-4308-ba40-ddc809559ae6\") "
	Oct 25 09:21:15 addons-442185 kubelet[2483]: I1025 09:21:15.813409    2483 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/8a65d104-69bd-4308-ba40-ddc809559ae6-data\") pod \"8a65d104-69bd-4308-ba40-ddc809559ae6\" (UID: \"8a65d104-69bd-4308-ba40-ddc809559ae6\") "
	Oct 25 09:21:15 addons-442185 kubelet[2483]: I1025 09:21:15.813439    2483 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vzhn\" (UniqueName: \"kubernetes.io/projected/8a65d104-69bd-4308-ba40-ddc809559ae6-kube-api-access-2vzhn\") pod \"8a65d104-69bd-4308-ba40-ddc809559ae6\" (UID: \"8a65d104-69bd-4308-ba40-ddc809559ae6\") "
	Oct 25 09:21:15 addons-442185 kubelet[2483]: I1025 09:21:15.813758    2483 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a65d104-69bd-4308-ba40-ddc809559ae6-data" (OuterVolumeSpecName: "data") pod "8a65d104-69bd-4308-ba40-ddc809559ae6" (UID: "8a65d104-69bd-4308-ba40-ddc809559ae6"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 25 09:21:15 addons-442185 kubelet[2483]: I1025 09:21:15.814347    2483 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a65d104-69bd-4308-ba40-ddc809559ae6-script" (OuterVolumeSpecName: "script") pod "8a65d104-69bd-4308-ba40-ddc809559ae6" (UID: "8a65d104-69bd-4308-ba40-ddc809559ae6"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 25 09:21:15 addons-442185 kubelet[2483]: I1025 09:21:15.816447    2483 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a65d104-69bd-4308-ba40-ddc809559ae6-kube-api-access-2vzhn" (OuterVolumeSpecName: "kube-api-access-2vzhn") pod "8a65d104-69bd-4308-ba40-ddc809559ae6" (UID: "8a65d104-69bd-4308-ba40-ddc809559ae6"). InnerVolumeSpecName "kube-api-access-2vzhn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 25 09:21:15 addons-442185 kubelet[2483]: I1025 09:21:15.913992    2483 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/8a65d104-69bd-4308-ba40-ddc809559ae6-script\") on node \"addons-442185\" DevicePath \"\""
	Oct 25 09:21:15 addons-442185 kubelet[2483]: I1025 09:21:15.914062    2483 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/8a65d104-69bd-4308-ba40-ddc809559ae6-data\") on node \"addons-442185\" DevicePath \"\""
	Oct 25 09:21:15 addons-442185 kubelet[2483]: I1025 09:21:15.914075    2483 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2vzhn\" (UniqueName: \"kubernetes.io/projected/8a65d104-69bd-4308-ba40-ddc809559ae6-kube-api-access-2vzhn\") on node \"addons-442185\" DevicePath \"\""
	Oct 25 09:21:16 addons-442185 kubelet[2483]: I1025 09:21:16.687564    2483 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a65d104-69bd-4308-ba40-ddc809559ae6" path="/var/lib/kubelet/pods/8a65d104-69bd-4308-ba40-ddc809559ae6/volumes"
	Oct 25 09:21:25 addons-442185 kubelet[2483]: I1025 09:21:25.680691    2483 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:21:30 addons-442185 kubelet[2483]: E1025 09:21:30.686232    2483 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-f697f" podUID="a0d8a23e-1c3a-45bf-be1b-d186a2ce0f8d"
	Oct 25 09:21:42 addons-442185 kubelet[2483]: E1025 09:21:42.682831    2483 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-f697f" podUID="a0d8a23e-1c3a-45bf-be1b-d186a2ce0f8d"
	Oct 25 09:21:45 addons-442185 kubelet[2483]: I1025 09:21:45.624509    2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/2681492a-b44b-4469-a37f-339df1d9be68-data\") pod \"helper-pod-create-pvc-0147262d-97ff-4b73-9f09-75c63074e57d\" (UID: \"2681492a-b44b-4469-a37f-339df1d9be68\") " pod="local-path-storage/helper-pod-create-pvc-0147262d-97ff-4b73-9f09-75c63074e57d"
	Oct 25 09:21:45 addons-442185 kubelet[2483]: I1025 09:21:45.624577    2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/2681492a-b44b-4469-a37f-339df1d9be68-script\") pod \"helper-pod-create-pvc-0147262d-97ff-4b73-9f09-75c63074e57d\" (UID: \"2681492a-b44b-4469-a37f-339df1d9be68\") " pod="local-path-storage/helper-pod-create-pvc-0147262d-97ff-4b73-9f09-75c63074e57d"
	Oct 25 09:21:45 addons-442185 kubelet[2483]: I1025 09:21:45.624607    2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5jqc\" (UniqueName: \"kubernetes.io/projected/2681492a-b44b-4469-a37f-339df1d9be68-kube-api-access-x5jqc\") pod \"helper-pod-create-pvc-0147262d-97ff-4b73-9f09-75c63074e57d\" (UID: \"2681492a-b44b-4469-a37f-339df1d9be68\") " pod="local-path-storage/helper-pod-create-pvc-0147262d-97ff-4b73-9f09-75c63074e57d"
	Oct 25 09:21:47 addons-442185 kubelet[2483]: E1025 09:21:47.080663    2483 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 25 09:21:47 addons-442185 kubelet[2483]: E1025 09:21:47.080761    2483 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 25 09:21:47 addons-442185 kubelet[2483]: E1025 09:21:47.080847    2483 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-0147262d-97ff-4b73-9f09-75c63074e57d_local-path-storage(2681492a-b44b-4469-a37f-339df1d9be68): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 25 09:21:47 addons-442185 kubelet[2483]: E1025 09:21:47.080883    2483 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-0147262d-97ff-4b73-9f09-75c63074e57d" podUID="2681492a-b44b-4469-a37f-339df1d9be68"
	Oct 25 09:21:47 addons-442185 kubelet[2483]: E1025 09:21:47.391058    2483 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-0147262d-97ff-4b73-9f09-75c63074e57d" podUID="2681492a-b44b-4469-a37f-339df1d9be68"
	Oct 25 09:21:53 addons-442185 kubelet[2483]: E1025 09:21:53.682769    2483 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-f697f" podUID="a0d8a23e-1c3a-45bf-be1b-d186a2ce0f8d"
	Oct 25 09:21:58 addons-442185 kubelet[2483]: I1025 09:21:58.681083    2483 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-b27h4" secret="" err="secret \"gcp-auth\" not found"
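Every failure in this kubelet log reduces to the same Docker Hub response: "toomanyrequests: You have reached your unauthenticated pull rate limit". Two common mitigations, sketched here, are to pre-load the images from the host so the node never contacts Docker Hub, or to authenticate the pulls (registry credentials are an assumption, not something this job has):

    # pre-load from the host cache; profile name taken from this report
    minikube -p addons-442185 image load docker.io/kicbase/echo-server:1.0
    minikube -p addons-442185 image load docker.io/busybox:stable

    # or authenticate inside the node
    minikube -p addons-442185 ssh -- docker login -u <user>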
	
	
	==> storage-provisioner [f0e1d4c04b89] <==
	W1025 09:21:35.062117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:37.066064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:37.072028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:39.077401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:39.083000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:41.088086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:41.094037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:43.098447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:43.105781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:45.109505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:45.115596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:47.119371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:47.124784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:49.128649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:49.136725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:51.140225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:51.145363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:53.149102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:53.157564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:55.160686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:55.167446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:57.172226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:57.177419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:59.183642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:21:59.190680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
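These warnings fire every ~2s because the storage-provisioner, almost certainly via its Endpoints-based leader election, still reads and writes core/v1 Endpoints, which API servers flag as deprecated from v1.33 on. They are cosmetic; the replacement resources the warning points at already exist in the cluster:

    kubectl --context addons-442185 get endpointslices -n kube-system
    kubectl --context addons-442185 get leases -n kube-system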
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-442185 -n addons-442185
helpers_test.go:269: (dbg) Run:  kubectl --context addons-442185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-f697f test-local-path helper-pod-create-pvc-0147262d-97ff-4b73-9f09-75c63074e57d
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-442185 describe pod hello-world-app-5d498dc89-f697f test-local-path helper-pod-create-pvc-0147262d-97ff-4b73-9f09-75c63074e57d
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-442185 describe pod hello-world-app-5d498dc89-f697f test-local-path helper-pod-create-pvc-0147262d-97ff-4b73-9f09-75c63074e57d: exit status 1 (76.031143ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-f697f
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-442185/192.168.39.30
	Start Time:       Sat, 25 Oct 2025 09:16:57 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.35
	IPs:
	  IP:           10.244.0.35
	Controlled By:  ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zcnh4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zcnh4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason          Age                   From               Message
	  ----     ------          ----                  ----               -------
	  Normal   Scheduled       5m3s                  default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-f697f to addons-442185
	  Normal   SandboxChanged  5m                    kubelet            Pod sandbox changed, it will be killed and re-created.
	  Warning  Failed          3m27s (x2 over 5m1s)  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling         115s (x5 over 5m2s)   kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed          114s (x5 over 5m1s)   kubelet            Error: ErrImagePull
	  Warning  Failed          114s (x3 over 4m47s)  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed          88s (x15 over 5m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff         30s (x19 over 5m)     kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-scz5p (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-scz5p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "helper-pod-create-pvc-0147262d-97ff-4b73-9f09-75c63074e57d" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-442185 describe pod hello-world-app-5d498dc89-f697f test-local-path helper-pod-create-pvc-0147262d-97ff-4b73-9f09-75c63074e57d: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-442185 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/LocalPath (302.10s)
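Net result: test-pvc never left Pending within the 5m window because the local-path helper pod that provisions the volume could not pull busybox:stable past Docker Hub's unauthenticated rate limit, so this reads as CI-infrastructure flake rather than a provisioner regression. A manual re-check after the limit resets would be:

    kubectl --context addons-442185 get pvc test-pvc -n default
    kubectl --context addons-442185 get events -n local-path-storage \
      --sort-by=.lastTimestamp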

TestFunctional/parallel/DashboardCmd (301.7s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-447073 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-447073 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-447073 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-447073 --alsologtostderr -v=1] stderr:
I1025 09:26:50.350527  381028 out.go:360] Setting OutFile to fd 1 ...
I1025 09:26:50.350639  381028 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:26:50.350651  381028 out.go:374] Setting ErrFile to fd 2...
I1025 09:26:50.350657  381028 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:26:50.350827  381028 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
I1025 09:26:50.351110  381028 mustload.go:65] Loading cluster: functional-447073
I1025 09:26:50.351549  381028 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:26:50.353767  381028 host.go:66] Checking if "functional-447073" exists ...
I1025 09:26:50.354028  381028 api_server.go:166] Checking apiserver status ...
I1025 09:26:50.354075  381028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 09:26:50.356921  381028 main.go:141] libmachine: domain functional-447073 has defined MAC address 52:54:00:28:71:c8 in network mk-functional-447073
I1025 09:26:50.357486  381028 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:71:c8", ip: ""} in network mk-functional-447073: {Iface:virbr1 ExpiryTime:2025-10-25 10:23:33 +0000 UTC Type:0 Mac:52:54:00:28:71:c8 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-447073 Clientid:01:52:54:00:28:71:c8}
I1025 09:26:50.357522  381028 main.go:141] libmachine: domain functional-447073 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:71:c8 in network mk-functional-447073
I1025 09:26:50.357766  381028 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/functional-447073/id_rsa Username:docker}
I1025 09:26:50.471067  381028 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/9332/cgroup
W1025 09:26:50.500710  381028 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/9332/cgroup: Process exited with status 1
stdout:

stderr:
I1025 09:26:50.500779  381028 ssh_runner.go:195] Run: ls
I1025 09:26:50.510472  381028 api_server.go:253] Checking apiserver healthz at https://192.168.39.191:8441/healthz ...
I1025 09:26:50.527776  381028 api_server.go:279] https://192.168.39.191:8441/healthz returned 200:
ok
W1025 09:26:50.527839  381028 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1025 09:26:50.528072  381028 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:26:50.528098  381028 addons.go:69] Setting dashboard=true in profile "functional-447073"
I1025 09:26:50.528108  381028 addons.go:238] Setting addon dashboard=true in "functional-447073"
I1025 09:26:50.528170  381028 host.go:66] Checking if "functional-447073" exists ...
I1025 09:26:50.531591  381028 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1025 09:26:50.532745  381028 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1025 09:26:50.533770  381028 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1025 09:26:50.533784  381028 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1025 09:26:50.536403  381028 main.go:141] libmachine: domain functional-447073 has defined MAC address 52:54:00:28:71:c8 in network mk-functional-447073
I1025 09:26:50.536850  381028 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:71:c8", ip: ""} in network mk-functional-447073: {Iface:virbr1 ExpiryTime:2025-10-25 10:23:33 +0000 UTC Type:0 Mac:52:54:00:28:71:c8 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-447073 Clientid:01:52:54:00:28:71:c8}
I1025 09:26:50.536872  381028 main.go:141] libmachine: domain functional-447073 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:71:c8 in network mk-functional-447073
I1025 09:26:50.537055  381028 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/functional-447073/id_rsa Username:docker}
I1025 09:26:50.680090  381028 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1025 09:26:50.680126  381028 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1025 09:26:50.731215  381028 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1025 09:26:50.731274  381028 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1025 09:26:50.790718  381028 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1025 09:26:50.790746  381028 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1025 09:26:50.829940  381028 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1025 09:26:50.829963  381028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1025 09:26:50.855415  381028 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1025 09:26:50.855447  381028 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1025 09:26:50.882865  381028 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1025 09:26:50.882895  381028 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1025 09:26:50.927782  381028 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1025 09:26:50.927811  381028 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1025 09:26:50.960244  381028 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1025 09:26:50.960278  381028 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1025 09:26:50.987442  381028 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1025 09:26:50.987471  381028 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1025 09:26:51.038405  381028 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1025 09:26:52.076497  381028 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.0380215s)
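All ten dashboard manifests applied cleanly in about a second, so the failure that follows does not come from this step; it comes down to whether the dashboard pods can pull their images. Checking the rollout by hand against the same context would look like this (a sketch, not something the test itself runs):

    kubectl --context functional-447073 -n kubernetes-dashboard get deploy,pods
    kubectl --context functional-447073 -n kubernetes-dashboard rollout status deploy/kubernetes-dashboard --timeout=120s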
I1025 09:26:52.078270  381028 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-447073 addons enable metrics-server

I1025 09:26:52.079466  381028 addons.go:201] Writing out "functional-447073" config to set dashboard=true...
W1025 09:26:52.079727  381028 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1025 09:26:52.080442  381028 kapi.go:59] client config for functional-447073: &rest.Config{Host:"https://192.168.39.191:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.key", CAFile:"/home/jenkins/minikube-integration/21767-367343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1025 09:26:52.080935  381028 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1025 09:26:52.080950  381028 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1025 09:26:52.080955  381028 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1025 09:26:52.080959  381028 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1025 09:26:52.080965  381028 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1025 09:26:52.089406  381028 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  6a97ae70-ad0a-4a6d-bc9f-90f067b548ef 765 0 2025-10-25 09:26:52 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-25 09:26:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.99.127.34,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.99.127.34],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1025 09:26:52.089607  381028 out.go:285] * Launching proxy ...
* Launching proxy ...
I1025 09:26:52.089674  381028 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-447073 proxy --port 36195]
I1025 09:26:52.090081  381028 dashboard.go:157] Waiting for kubectl to output host:port ...
I1025 09:26:52.136183  381028 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W1025 09:26:52.136243  381028 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1025 09:26:52.154246  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[60cd26e6-9c38-460d-861d-e0904f31dd64] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc0008fd740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000a0c280 TLS:<nil>}
I1025 09:26:52.154368  381028 retry.go:31] will retry after 128.026µs: Temporary Error: unexpected response code: 503
I1025 09:26:52.159864  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[14bde31c-9919-4d60-a229-10a6e41818fc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc0008fdb80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2a00 TLS:<nil>}
I1025 09:26:52.159928  381028 retry.go:31] will retry after 94.815µs: Temporary Error: unexpected response code: 503
I1025 09:26:52.163892  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[312cbaa2-558a-45ec-aa09-617b5c30c8bc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc0016bc940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2b40 TLS:<nil>}
I1025 09:26:52.163970  381028 retry.go:31] will retry after 180.344µs: Temporary Error: unexpected response code: 503
I1025 09:26:52.167918  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[905da0c5-3ad1-4879-93ea-b9060e3e9354] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc0008fde80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008963c0 TLS:<nil>}
I1025 09:26:52.167973  381028 retry.go:31] will retry after 315.711µs: Temporary Error: unexpected response code: 503
I1025 09:26:52.172993  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fad61103-a2a4-44ae-9be6-c26a90ea4db0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc0016bca00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2c80 TLS:<nil>}
I1025 09:26:52.173040  381028 retry.go:31] will retry after 546.07µs: Temporary Error: unexpected response code: 503
I1025 09:26:52.176527  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4eb643bd-3a20-480f-8308-b19627f8cbe5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc00173a0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000896500 TLS:<nil>}
I1025 09:26:52.176584  381028 retry.go:31] will retry after 885.559µs: Temporary Error: unexpected response code: 503
I1025 09:26:52.179723  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[19062fff-cea8-4031-aa2f-e05d93966494] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc00173a180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2dc0 TLS:<nil>}
I1025 09:26:52.179784  381028 retry.go:31] will retry after 912.959µs: Temporary Error: unexpected response code: 503
I1025 09:26:52.183216  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4dda0320-776f-48a4-9472-6cfab2d448e9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc000a05c40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2f00 TLS:<nil>}
I1025 09:26:52.183272  381028 retry.go:31] will retry after 1.987818ms: Temporary Error: unexpected response code: 503
I1025 09:26:52.188298  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[925fc504-ab20-4c2d-b18c-1daf73be3144] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc000a05e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000a0c3c0 TLS:<nil>}
I1025 09:26:52.188346  381028 retry.go:31] will retry after 3.843618ms: Temporary Error: unexpected response code: 503
I1025 09:26:52.194851  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0701b0f9-0395-4c1f-843c-e4ba7f56c487] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc00173a280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000a0c500 TLS:<nil>}
I1025 09:26:52.194902  381028 retry.go:31] will retry after 2.289623ms: Temporary Error: unexpected response code: 503
I1025 09:26:52.200049  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[955e21cd-52d1-485e-a41b-48484580f550] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc0016bcb80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d32c0 TLS:<nil>}
I1025 09:26:52.200103  381028 retry.go:31] will retry after 8.179114ms: Temporary Error: unexpected response code: 503
I1025 09:26:52.211779  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b8a00912-dbc9-4de4-8401-16fc4874999c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc00173a380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000896640 TLS:<nil>}
I1025 09:26:52.211818  381028 retry.go:31] will retry after 5.301198ms: Temporary Error: unexpected response code: 503
I1025 09:26:52.220560  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0ddc2614-d622-4a10-ad39-14c3369ee9ec] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc0016bcc80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d3400 TLS:<nil>}
I1025 09:26:52.220614  381028 retry.go:31] will retry after 14.354366ms: Temporary Error: unexpected response code: 503
I1025 09:26:52.238995  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8f47d05a-69e8-4fff-83b8-326500893e64] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc0016bcd80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000896780 TLS:<nil>}
I1025 09:26:52.239058  381028 retry.go:31] will retry after 15.28979ms: Temporary Error: unexpected response code: 503
I1025 09:26:52.258497  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b4c289b7-42df-4180-b2a1-6b1ea37c6f93] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc00163c080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008968c0 TLS:<nil>}
I1025 09:26:52.258563  381028 retry.go:31] will retry after 31.948769ms: Temporary Error: unexpected response code: 503
I1025 09:26:52.294762  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cfe4d405-e88c-4f7c-9d27-90af1b2281cf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc00173a440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000a0c640 TLS:<nil>}
I1025 09:26:52.294831  381028 retry.go:31] will retry after 22.521442ms: Temporary Error: unexpected response code: 503
I1025 09:26:52.322806  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d05a8641-11d7-4f3e-adcb-adc13e8407d2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc0016bce80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d3540 TLS:<nil>}
I1025 09:26:52.322871  381028 retry.go:31] will retry after 69.631195ms: Temporary Error: unexpected response code: 503
I1025 09:26:52.396104  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0a010067-93f2-4714-8943-3ccc9178960e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc00163c180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000896a00 TLS:<nil>}
I1025 09:26:52.396200  381028 retry.go:31] will retry after 83.80127ms: Temporary Error: unexpected response code: 503
I1025 09:26:52.485133  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8740c6e6-15ec-4c09-ab34-70912599941a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc00173a540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000a0ca00 TLS:<nil>}
I1025 09:26:52.485235  381028 retry.go:31] will retry after 133.621762ms: Temporary Error: unexpected response code: 503
I1025 09:26:52.623084  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[89f5c805-8b9a-4215-b3a1-8e232baff865] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc00163c280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d3680 TLS:<nil>}
I1025 09:26:52.623172  381028 retry.go:31] will retry after 231.266833ms: Temporary Error: unexpected response code: 503
I1025 09:26:52.858006  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9d4ce14e-97d2-4bb2-826a-c49c235fd467] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:52 GMT]] Body:0xc0016bcfc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000a0cb40 TLS:<nil>}
I1025 09:26:52.858085  381028 retry.go:31] will retry after 251.431582ms: Temporary Error: unexpected response code: 503
I1025 09:26:53.116857  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6ba2b0b5-3d65-4c73-a4f1-cda2cd08e171] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:53 GMT]] Body:0xc00173a600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000896c80 TLS:<nil>}
I1025 09:26:53.116954  381028 retry.go:31] will retry after 444.761335ms: Temporary Error: unexpected response code: 503
I1025 09:26:53.568153  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7ea31a52-9b93-4b20-b23d-ae136fbba16d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:53 GMT]] Body:0xc0016bd100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008972c0 TLS:<nil>}
I1025 09:26:53.568250  381028 retry.go:31] will retry after 492.543944ms: Temporary Error: unexpected response code: 503
I1025 09:26:54.065775  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8e0bde21-7799-45be-8833-a7f9739e0848] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:54 GMT]] Body:0xc00173a700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000897400 TLS:<nil>}
I1025 09:26:54.065847  381028 retry.go:31] will retry after 677.33781ms: Temporary Error: unexpected response code: 503
I1025 09:26:54.747030  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b988fe35-b837-41b9-8745-55ea6268118f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:54 GMT]] Body:0xc00163c380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d3a40 TLS:<nil>}
I1025 09:26:54.747109  381028 retry.go:31] will retry after 1.792880528s: Temporary Error: unexpected response code: 503
I1025 09:26:56.544037  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0488de4b-55c1-4ca5-acb8-1b3ca3657cc1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:56 GMT]] Body:0xc00163c440 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000a0cc80 TLS:<nil>}
I1025 09:26:56.544111  381028 retry.go:31] will retry after 2.555805982s: Temporary Error: unexpected response code: 503
I1025 09:26:59.106147  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ff70ecb4-4b56-4c34-a561-ed60a1019855] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:26:59 GMT]] Body:0xc00173a800 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000a0cdc0 TLS:<nil>}
I1025 09:26:59.106226  381028 retry.go:31] will retry after 4.026877371s: Temporary Error: unexpected response code: 503
I1025 09:27:03.152413  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[257ac9fc-7b59-4efa-82d8-d12b681d4859] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:27:03 GMT]] Body:0xc0016bd240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000a0cf00 TLS:<nil>}
I1025 09:27:03.152515  381028 retry.go:31] will retry after 5.393933259s: Temporary Error: unexpected response code: 503
I1025 09:27:08.552283  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[49ced155-9594-4a76-88be-99cf8be8d131] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:27:08 GMT]] Body:0xc00173a880 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000897540 TLS:<nil>}
I1025 09:27:08.552356  381028 retry.go:31] will retry after 11.107841865s: Temporary Error: unexpected response code: 503
I1025 09:27:19.668016  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9a740246-52aa-443a-b391-dec096f318a3] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:27:19 GMT]] Body:0xc0016bd340 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d3b80 TLS:<nil>}
I1025 09:27:19.668099  381028 retry.go:31] will retry after 19.0023215s: Temporary Error: unexpected response code: 503
I1025 09:27:38.676950  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[49fd55d1-7160-4e3e-8df1-fa8f4b6adde1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:27:38 GMT]] Body:0xc0008a2580 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000897680 TLS:<nil>}
I1025 09:27:38.677027  381028 retry.go:31] will retry after 12.980196884s: Temporary Error: unexpected response code: 503
I1025 09:27:51.661646  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cd410947-a0e9-4ea6-a7b4-b28b6da6afad] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:27:51 GMT]] Body:0xc0016bd480 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d3e00 TLS:<nil>}
I1025 09:27:51.661746  381028 retry.go:31] will retry after 21.95305777s: Temporary Error: unexpected response code: 503
I1025 09:28:13.620018  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5c4ceef7-ba0b-4f5b-ad10-baff8f57d0db] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:28:13 GMT]] Body:0xc0008a2640 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008977c0 TLS:<nil>}
I1025 09:28:13.620102  381028 retry.go:31] will retry after 27.798576355s: Temporary Error: unexpected response code: 503
I1025 09:28:41.422983  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a3a85353-bf5f-46d8-a272-cd7033958172] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:28:41 GMT]] Body:0xc0008a2700 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00015a140 TLS:<nil>}
I1025 09:28:41.423070  381028 retry.go:31] will retry after 35.177912954s: Temporary Error: unexpected response code: 503
I1025 09:29:16.605391  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0cba962b-b596-4284-9ff4-e0296a823c82] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:29:16 GMT]] Body:0xc0016bc040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000a0c000 TLS:<nil>}
I1025 09:29:16.605475  381028 retry.go:31] will retry after 52.027636187s: Temporary Error: unexpected response code: 503
I1025 09:30:08.639491  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e8139967-9d52-4197-9ebe-cc3c78662301] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:30:08 GMT]] Body:0xc0016bc0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000a0c140 TLS:<nil>}
I1025 09:30:08.639588  381028 retry.go:31] will retry after 41.438709848s: Temporary Error: unexpected response code: 503
I1025 09:30:50.086227  381028 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[06786a30-275f-40f9-866e-765d4bcf2f5d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:30:50 GMT]] Body:0xc0008a2040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000896140 TLS:<nil>}
I1025 09:30:50.086345  381028 retry.go:31] will retry after 1m6.111767941s: Temporary Error: unexpected response code: 503
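The proxy keeps answering 503 because the apiserver's service proxy has no ready endpoint to forward to; the short 182/188-byte JSON bodies are consistent with the usual "no endpoints available for service" status. The Docker log in the post-mortem below shows why the endpoints never appear. Confirming this from the host would look like the following (a diagnostic sketch, assuming the cluster is still up):

    kubectl --context functional-447073 -n kubernetes-dashboard get endpoints kubernetes-dashboard
    kubectl --context functional-447073 -n kubernetes-dashboard get pods -o wide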
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-447073 -n functional-447073
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 logs -n 25
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service        │ functional-447073 service hello-node --url                                                                         │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ ssh            │ functional-447073 ssh sudo umount -f /mount-9p                                                                     │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │                     │
	│ ssh            │ functional-447073 ssh echo hello                                                                                   │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ ssh            │ functional-447073 ssh cat /etc/hostname                                                                            │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ mount          │ -p functional-447073 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1167561909/001:/mount2 --alsologtostderr -v=1 │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │                     │
	│ ssh            │ functional-447073 ssh findmnt -T /mount1                                                                           │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │                     │
	│ mount          │ -p functional-447073 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1167561909/001:/mount1 --alsologtostderr -v=1 │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │                     │
	│ ssh            │ functional-447073 ssh sudo cat /etc/test/nested/copy/371331/hosts                                                  │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ addons         │ functional-447073 addons list                                                                                      │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ addons         │ functional-447073 addons list -o json                                                                              │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ ssh            │ functional-447073 ssh findmnt -T /mount1                                                                           │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ ssh            │ functional-447073 ssh findmnt -T /mount2                                                                           │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ ssh            │ functional-447073 ssh findmnt -T /mount3                                                                           │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ mount          │ -p functional-447073 --kill=true                                                                                   │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │                     │
	│ service        │ functional-447073 service hello-node-connect --url                                                                 │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ image          │ functional-447073 image ls --format short --alsologtostderr                                                        │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ image          │ functional-447073 image ls --format json --alsologtostderr                                                         │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ image          │ functional-447073 image ls --format table --alsologtostderr                                                        │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ image          │ functional-447073 image ls --format yaml --alsologtostderr                                                         │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ ssh            │ functional-447073 ssh pgrep buildkitd                                                                              │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │                     │
	│ image          │ functional-447073 image build -t localhost/my-image:functional-447073 testdata/build --alsologtostderr             │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ image          │ functional-447073 image ls                                                                                         │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ update-context │ functional-447073 update-context --alsologtostderr -v=2                                                            │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ update-context │ functional-447073 update-context --alsologtostderr -v=2                                                            │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ update-context │ functional-447073 update-context --alsologtostderr -v=2                                                            │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:26:50
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:26:50.220293  381001 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:26:50.220471  381001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:26:50.220487  381001 out.go:374] Setting ErrFile to fd 2...
	I1025 09:26:50.220494  381001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:26:50.220916  381001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
	I1025 09:26:50.221572  381001 out.go:368] Setting JSON to false
	I1025 09:26:50.222863  381001 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4152,"bootTime":1761380258,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:26:50.223001  381001 start.go:141] virtualization: kvm guest
	I1025 09:26:50.224795  381001 out.go:179] * [functional-447073] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:26:50.226216  381001 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:26:50.226244  381001 notify.go:220] Checking for updates...
	I1025 09:26:50.228265  381001 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:26:50.229474  381001 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-367343/kubeconfig
	I1025 09:26:50.230673  381001 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-367343/.minikube
	I1025 09:26:50.231736  381001 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:26:50.232879  381001 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:26:50.234299  381001 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:26:50.234772  381001 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:26:50.271376  381001 out.go:179] * Using the kvm2 driver based on the existing profile
	I1025 09:26:50.272952  381001 start.go:305] selected driver: kvm2
	I1025 09:26:50.272970  381001 start.go:925] validating driver "kvm2" against &{Name:functional-447073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-447073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:26:50.273073  381001 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:26:50.275002  381001 out.go:203] 
	W1025 09:26:50.276433  381001 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is below the usable minimum of 1800MB
	I1025 09:26:50.278578  381001 out.go:203] 
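This "Last Start" appears to be a deliberately undersized run: it requests only 250MiB, below minikube's usable minimum of 1800MB, so the RSRC_INSUFFICIENT_REQ_MEMORY exit is the expected outcome of that invocation rather than part of the dashboard failure. A start that clears the memory check would look something like this (illustrative only; just the profile name and driver are taken from this log):

    out/minikube-linux-amd64 start -p functional-447073 --driver=kvm2 --memory=2048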
	
	
	==> Docker <==
	Oct 25 09:27:07 functional-447073 dockerd[6728]: time="2025-10-25T09:27:07.837057755Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:27:08 functional-447073 dockerd[6728]: time="2025-10-25T09:27:08.347646483Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:27:08 functional-447073 dockerd[6728]: time="2025-10-25T09:27:08.831031140Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:27:16 functional-447073 dockerd[6728]: time="2025-10-25T09:27:16.148902040Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:27:22 functional-447073 dockerd[6728]: time="2025-10-25T09:27:22.111695653Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:27:35 functional-447073 dockerd[6728]: time="2025-10-25T09:27:35.356704131Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:27:35 functional-447073 dockerd[6728]: time="2025-10-25T09:27:35.838179234Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:27:37 functional-447073 dockerd[6728]: time="2025-10-25T09:27:37.355435743Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:27:37 functional-447073 dockerd[6728]: time="2025-10-25T09:27:37.841073583Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:27:43 functional-447073 dockerd[6728]: time="2025-10-25T09:27:43.378185569Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:27:43 functional-447073 cri-dockerd[7606]: time="2025-10-25T09:27:43Z" level=info msg="Stop pulling image docker.io/mysql:5.7: 5.7: Pulling from library/mysql"
	Oct 25 09:27:52 functional-447073 dockerd[6728]: time="2025-10-25T09:27:52.113030399Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:28:24 functional-447073 dockerd[6728]: time="2025-10-25T09:28:24.349728013Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:28:24 functional-447073 dockerd[6728]: time="2025-10-25T09:28:24.833330000Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:28:32 functional-447073 dockerd[6728]: time="2025-10-25T09:28:32.345056635Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:28:32 functional-447073 dockerd[6728]: time="2025-10-25T09:28:32.831404480Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:28:36 functional-447073 dockerd[6728]: time="2025-10-25T09:28:36.101386579Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:28:43 functional-447073 dockerd[6728]: time="2025-10-25T09:28:43.094488874Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:29:56 functional-447073 dockerd[6728]: time="2025-10-25T09:29:56.350076943Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:29:56 functional-447073 dockerd[6728]: time="2025-10-25T09:29:56.837007356Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:30:02 functional-447073 dockerd[6728]: time="2025-10-25T09:30:02.346878112Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:30:03 functional-447073 dockerd[6728]: time="2025-10-25T09:30:03.127942932Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:30:03 functional-447073 cri-dockerd[7606]: time="2025-10-25T09:30:03Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
	Oct 25 09:30:08 functional-447073 dockerd[6728]: time="2025-10-25T09:30:08.132928928Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:30:16 functional-447073 dockerd[6728]: time="2025-10-25T09:30:16.136595644Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
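Every pull in this window dies on Docker Hub's unauthenticated rate limit, which is what keeps the dashboard and metrics-scraper images (and the mysql:5.7 image) from ever arriving. Two common mitigations in CI, offered here as suggestions rather than anything this job configures: pre-load the images into the node from a host-side cache, or start the cluster with a --registry-mirror. For example:

    out/minikube-linux-amd64 -p functional-447073 image load docker.io/kubernetesui/dashboard:v2.7.0
    out/minikube-linux-amd64 -p functional-447073 image load docker.io/kubernetesui/metrics-scraper:v1.0.8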
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bec3e541755dd       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           4 minutes ago       Running             echo-server               0                   b670aab77940a       hello-node-connect-7d85dfc575-55ktk
	ef24ad09815d1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   4 minutes ago       Exited              mount-munger              0                   f8a0438a40181       busybox-mount
	410b2328e566f       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   22e650fd2b6a5       hello-node-75c85bcc94-lfbb2
	905d241ed9acf       52546a367cc9e                                                                                         5 minutes ago       Running             coredns                   2                   03a4522316094       coredns-66bc5c9577-gs9ll
	ca7033bc221e8       fc25172553d79                                                                                         5 minutes ago       Running             kube-proxy                3                   9d0b19428b8a9       kube-proxy-dn86t
	ecca13c086d2d       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       4                   9010440b6f402       storage-provisioner
	db59fa10501d3       c3994bc696102                                                                                         5 minutes ago       Running             kube-apiserver            0                   423851c495f20       kube-apiserver-functional-447073
	20ee75adca2d4       c80c8dbafe7dd                                                                                         5 minutes ago       Running             kube-controller-manager   3                   4d192cdbeb7b7       kube-controller-manager-functional-447073
	e2de07c1e9692       7dd6aaa1717ab                                                                                         5 minutes ago       Running             kube-scheduler            3                   f05d7ba906312       kube-scheduler-functional-447073
	4d8d2f350016d       5f1f5298c888d                                                                                         5 minutes ago       Running             etcd                      2                   8df2a89e1da3c       etcd-functional-447073
	c528e6a20a051       7dd6aaa1717ab                                                                                         5 minutes ago       Exited              kube-scheduler            2                   b19de97ddd2c0       kube-scheduler-functional-447073
	e34b4f4904825       c80c8dbafe7dd                                                                                         5 minutes ago       Exited              kube-controller-manager   2                   6ce6473a977f6       kube-controller-manager-functional-447073
	87f021b308baf       6e38f40d628db                                                                                         5 minutes ago       Exited              storage-provisioner       3                   2119e1d5b5e1b       storage-provisioner
	5cc934531c377       fc25172553d79                                                                                         5 minutes ago       Exited              kube-proxy                2                   b64ce78b38364       kube-proxy-dn86t
	abebe2c25d10b       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   1                   3fdc0f4fcd22b       coredns-66bc5c9577-gs9ll
	9aae27d2a9866       5f1f5298c888d                                                                                         6 minutes ago       Exited              etcd                      1                   88c235d505b1d       etcd-functional-447073
	
	
	==> coredns [905d241ed9ac] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48904 - 16127 "HINFO IN 4123962544261759008.795220563287131680. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.071803834s
	
	
	==> coredns [abebe2c25d10] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51408 - 37004 "HINFO IN 8190550577693892366.537718483269858856. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.072993556s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-447073
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-447073
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=functional-447073
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_23_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:23:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-447073
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:31:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:31:10 +0000   Sat, 25 Oct 2025 09:23:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:31:10 +0000   Sat, 25 Oct 2025 09:23:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:31:10 +0000   Sat, 25 Oct 2025 09:23:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:31:10 +0000   Sat, 25 Oct 2025 09:24:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.191
	  Hostname:    functional-447073
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b2e995ea732422c9b8a28d66c0cc0f7
	  System UUID:                0b2e995e-a732-422c-9b8a-28d66c0cc0f7
	  Boot ID:                    11080a84-7c92-4095-a589-4b4722f040a1
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-lfbb2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  default                     hello-node-connect-7d85dfc575-55ktk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  default                     mysql-5bb876957f-dtkfb                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    4m53s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 coredns-66bc5c9577-gs9ll                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m47s
	  kube-system                 etcd-functional-447073                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m53s
	  kube-system                 kube-apiserver-functional-447073              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-controller-manager-functional-447073     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 kube-proxy-dn86t                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                 kube-scheduler-functional-447073              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m47s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-4wkrm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2k76x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m24s                  kube-proxy       
	  Normal   Starting                 6m25s                  kube-proxy       
	  Normal   Starting                 7m44s                  kube-proxy       
	  Normal   Starting                 7m53s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  7m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m53s                  kubelet          Node functional-447073 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m53s                  kubelet          Node functional-447073 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m53s                  kubelet          Node functional-447073 status is now: NodeHasSufficientPID
	  Normal   NodeReady                7m51s                  kubelet          Node functional-447073 status is now: NodeReady
	  Normal   RegisteredNode           7m49s                  node-controller  Node functional-447073 event: Registered Node functional-447073 in Controller
	  Warning  ContainerGCFailed        6m53s                  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   Starting                 6m31s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m31s (x8 over 6m31s)  kubelet          Node functional-447073 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m31s (x8 over 6m31s)  kubelet          Node functional-447073 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m31s (x7 over 6m31s)  kubelet          Node functional-447073 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           6m24s                  node-controller  Node functional-447073 event: Registered Node functional-447073 in Controller
	  Normal   Starting                 5m30s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m30s (x8 over 5m30s)  kubelet          Node functional-447073 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m30s (x8 over 5m30s)  kubelet          Node functional-447073 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m30s (x7 over 5m30s)  kubelet          Node functional-447073 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           5m23s                  node-controller  Node functional-447073 event: Registered Node functional-447073 in Controller
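
	The Allocated resources table above checks out against the per-pod requests listed: 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 600m (mysql) = 1350m of the node's 2 CPUs (2000m), i.e. 67%; likewise 70Mi + 100Mi + 512Mi = 682Mi of memory. The mysql pod alone accounts for nearly half the CPU requests. The same view can be reproduced with:
	
	  kubectl --context functional-447073 describe node functional-447073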
	
	
	==> dmesg <==
	[  +1.174406] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.115483] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.116208] kauditd_printk_skb: 373 callbacks suppressed
	[  +0.099021] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.143359] kauditd_printk_skb: 165 callbacks suppressed
	[Oct25 09:24] kauditd_printk_skb: 18 callbacks suppressed
	[ +29.912852] kauditd_printk_skb: 214 callbacks suppressed
	[  +0.169938] kauditd_printk_skb: 12 callbacks suppressed
	[Oct25 09:25] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.469066] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.106289] kauditd_printk_skb: 218 callbacks suppressed
	[  +0.323254] kauditd_printk_skb: 234 callbacks suppressed
	[  +4.469445] kauditd_printk_skb: 17 callbacks suppressed
	[Oct25 09:26] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.497036] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.121277] kauditd_printk_skb: 466 callbacks suppressed
	[  +0.233285] kauditd_printk_skb: 174 callbacks suppressed
	[ +16.977484] kauditd_printk_skb: 17 callbacks suppressed
	[  +1.222076] kauditd_printk_skb: 133 callbacks suppressed
	[  +1.004494] kauditd_printk_skb: 172 callbacks suppressed
	[Oct25 09:27] kauditd_printk_skb: 80 callbacks suppressed
	[  +3.895106] crun[12285]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.000042] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [4d8d2f350016] <==
	{"level":"warn","ts":"2025-10-25T09:26:23.679015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.702315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.709649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.718819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.737354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.758875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.767706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.779301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.791931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.804457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.813676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.823099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.841377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.844791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.858190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.867826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.878826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.902067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.902334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.918936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.927854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.938312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.951399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.974176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:24.021985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45934","server-name":"","error":"EOF"}
	
	
	==> etcd [9aae27d2a986] <==
	{"level":"warn","ts":"2025-10-25T09:25:23.452417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:25:23.464651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:25:23.483282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:25:23.494583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:25:23.503664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:25:23.519803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:25:23.601127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38398","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:26:03.628328Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-25T09:26:03.628431Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-447073","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.191:2380"],"advertise-client-urls":["https://192.168.39.191:2379"]}
	{"level":"error","ts":"2025-10-25T09:26:03.628598Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:26:10.634879Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:26:10.638351Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:26:10.638454Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f21a8e08563785d2","current-leader-member-id":"f21a8e08563785d2"}
	{"level":"warn","ts":"2025-10-25T09:26:10.638558Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T09:26:10.638597Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T09:26:10.638604Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:26:10.638623Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-25T09:26:10.638638Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-25T09:26:10.638678Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.191:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T09:26:10.638685Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.191:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T09:26:10.638691Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.191:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:26:10.642995Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.191:2380"}
	{"level":"error","ts":"2025-10-25T09:26:10.643056Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.191:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:26:10.643078Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.191:2380"}
	{"level":"info","ts":"2025-10-25T09:26:10.643085Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-447073","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.191:2380"],"advertise-client-urls":["https://192.168.39.191:2379"]}
	
	
	==> kernel <==
	 09:31:51 up 8 min,  0 users,  load average: 0.79, 0.49, 0.25
	Linux functional-447073 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [db59fa10501d] <==
	I1025 09:26:24.745586       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 09:26:24.745772       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 09:26:24.746362       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:26:24.748073       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 09:26:24.748276       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:26:24.748468       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:26:24.754999       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1025 09:26:24.757183       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:26:24.761234       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:26:25.139456       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:26:25.558807       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:26:26.299429       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:26:26.350412       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:26:26.381841       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:26:26.389277       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:26:28.111401       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:26:28.313975       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:26:43.543969       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.57.151"}
	I1025 09:26:47.768251       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:26:47.882681       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.253.63"}
	I1025 09:26:51.629404       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:26:52.044465       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.127.34"}
	I1025 09:26:52.063664       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.8.206"}
	I1025 09:26:58.373740       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.250.97"}
	I1025 09:26:59.226227       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.146.159"}
	
	
	==> kube-controller-manager [20ee75adca2d] <==
	I1025 09:26:28.032457       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:26:28.032467       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:26:28.035335       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 09:26:28.035429       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 09:26:28.035472       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 09:26:28.035490       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 09:26:28.035495       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 09:26:28.040154       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:26:28.040192       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:26:28.040210       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:26:28.040226       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:26:28.042592       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:26:28.042682       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:26:28.042986       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-447073"
	I1025 09:26:28.043119       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 09:26:28.048498       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:26:28.051309       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:26:28.057190       1 shared_informer.go:356] "Caches are synced" controller="GC"
	E1025 09:26:51.769774       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:26:51.782545       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:26:51.795629       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:26:51.797656       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:26:51.811269       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:26:51.852314       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:26:51.883342       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [e34b4f490482] <==
	I1025 09:26:17.106425       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-proxy [5cc934531c37] <==
	I1025 09:26:16.365383       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:26:16.435233       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1025 09:26:16.440557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-447073&limit=500&resourceVersion=0\": dial tcp 192.168.39.191:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [ca7033bc221e] <==
	I1025 09:26:26.131945       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:26:26.232977       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:26:26.233254       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.191"]
	E1025 09:26:26.233581       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:26:26.292159       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1025 09:26:26.293179       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 09:26:26.293951       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:26:26.307133       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:26:26.307837       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:26:26.307850       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:26:26.314195       1 config.go:200] "Starting service config controller"
	I1025 09:26:26.314226       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:26:26.314243       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:26:26.314246       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:26:26.314260       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:26:26.314263       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:26:26.315395       1 config.go:309] "Starting node config controller"
	I1025 09:26:26.315422       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:26:26.315428       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:26:26.414802       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:26:26.416068       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:26:26.416103       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
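
	The "No iptables support for family IPv6" block above is expected on this VM: the kernel has no ip6tables nat table loaded, so kube-proxy falls back to single-stack IPv4, as the following "kube-proxy running in single-stack mode" line confirms. Whether the module is present can be checked with (a diagnostic sketch):
	
	  minikube -p functional-447073 ssh -- lsmod | grep ip6table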
	
	
	==> kube-scheduler [c528e6a20a05] <==
	
	
	==> kube-scheduler [e2de07c1e969] <==
	I1025 09:26:23.340097       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:26:24.611670       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:26:24.611706       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:26:24.611715       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:26:24.611721       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:26:24.675112       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:26:24.675148       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:26:24.683687       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:26:24.684140       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:26:24.684491       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:26:24.684696       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:26:24.785191       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:30:31 functional-447073 kubelet[9062]: E1025 09:30:31.112316    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4wkrm" podUID="2353cbd7-6db4-478f-8b7f-3d7011346eb4"
	Oct 25 09:30:37 functional-447073 kubelet[9062]: E1025 09:30:37.115233    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2k76x" podUID="bc0bff4e-676e-4de8-8733-3690e3f1c32a"
	Oct 25 09:30:38 functional-447073 kubelet[9062]: E1025 09:30:38.107649    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-dtkfb" podUID="f5239928-fb9e-48fa-b7e6-adc7b4ec2c3e"
	Oct 25 09:30:41 functional-447073 kubelet[9062]: E1025 09:30:41.116483    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d512914-9461-4c84-9831-d6966d601a40"
	Oct 25 09:30:42 functional-447073 kubelet[9062]: E1025 09:30:42.106687    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4wkrm" podUID="2353cbd7-6db4-478f-8b7f-3d7011346eb4"
	Oct 25 09:30:50 functional-447073 kubelet[9062]: E1025 09:30:50.105575    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-dtkfb" podUID="f5239928-fb9e-48fa-b7e6-adc7b4ec2c3e"
	Oct 25 09:30:50 functional-447073 kubelet[9062]: E1025 09:30:50.106869    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2k76x" podUID="bc0bff4e-676e-4de8-8733-3690e3f1c32a"
	Oct 25 09:30:53 functional-447073 kubelet[9062]: E1025 09:30:53.104341    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d512914-9461-4c84-9831-d6966d601a40"
	Oct 25 09:30:55 functional-447073 kubelet[9062]: E1025 09:30:55.105772    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4wkrm" podUID="2353cbd7-6db4-478f-8b7f-3d7011346eb4"
	Oct 25 09:31:01 functional-447073 kubelet[9062]: E1025 09:31:01.106120    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-dtkfb" podUID="f5239928-fb9e-48fa-b7e6-adc7b4ec2c3e"
	Oct 25 09:31:04 functional-447073 kubelet[9062]: E1025 09:31:04.104070    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d512914-9461-4c84-9831-d6966d601a40"
	Oct 25 09:31:04 functional-447073 kubelet[9062]: E1025 09:31:04.108023    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2k76x" podUID="bc0bff4e-676e-4de8-8733-3690e3f1c32a"
	Oct 25 09:31:06 functional-447073 kubelet[9062]: E1025 09:31:06.106935    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4wkrm" podUID="2353cbd7-6db4-478f-8b7f-3d7011346eb4"
	Oct 25 09:31:12 functional-447073 kubelet[9062]: E1025 09:31:12.106411    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-dtkfb" podUID="f5239928-fb9e-48fa-b7e6-adc7b4ec2c3e"
	Oct 25 09:31:17 functional-447073 kubelet[9062]: E1025 09:31:17.103631    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d512914-9461-4c84-9831-d6966d601a40"
	Oct 25 09:31:18 functional-447073 kubelet[9062]: E1025 09:31:18.105172    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2k76x" podUID="bc0bff4e-676e-4de8-8733-3690e3f1c32a"
	Oct 25 09:31:20 functional-447073 kubelet[9062]: E1025 09:31:20.106001    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4wkrm" podUID="2353cbd7-6db4-478f-8b7f-3d7011346eb4"
	Oct 25 09:31:25 functional-447073 kubelet[9062]: E1025 09:31:25.106480    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-dtkfb" podUID="f5239928-fb9e-48fa-b7e6-adc7b4ec2c3e"
	Oct 25 09:31:30 functional-447073 kubelet[9062]: E1025 09:31:30.104262    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d512914-9461-4c84-9831-d6966d601a40"
	Oct 25 09:31:31 functional-447073 kubelet[9062]: E1025 09:31:31.108349    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2k76x" podUID="bc0bff4e-676e-4de8-8733-3690e3f1c32a"
	Oct 25 09:31:33 functional-447073 kubelet[9062]: E1025 09:31:33.107663    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4wkrm" podUID="2353cbd7-6db4-478f-8b7f-3d7011346eb4"
	Oct 25 09:31:39 functional-447073 kubelet[9062]: E1025 09:31:39.107125    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-dtkfb" podUID="f5239928-fb9e-48fa-b7e6-adc7b4ec2c3e"
	Oct 25 09:31:43 functional-447073 kubelet[9062]: E1025 09:31:43.110854    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d512914-9461-4c84-9831-d6966d601a40"
	Oct 25 09:31:44 functional-447073 kubelet[9062]: E1025 09:31:44.106218    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2k76x" podUID="bc0bff4e-676e-4de8-8733-3690e3f1c32a"
	Oct 25 09:31:46 functional-447073 kubelet[9062]: E1025 09:31:46.105603    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4wkrm" podUID="2353cbd7-6db4-478f-8b7f-3d7011346eb4"
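
	Every kubelet error above is the same failure mode: back-off pulling a docker.io image blocked by the rate limit seen in the dockerd log. The affected pods can be listed the same way the harness does further below:
	
	  kubectl --context functional-447073 get po -A --field-selector=status.phase!=Running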
	
	
	==> storage-provisioner [87f021b308ba] <==
	I1025 09:26:16.508377       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:26:16.513155       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [ecca13c086d2] <==
	W1025 09:31:26.623849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:28.627631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:28.636155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:30.640597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:30.645674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:32.648997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:32.654492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:34.658485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:34.664061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:36.667426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:36.672688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:38.676841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:38.686574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:40.691372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:40.696716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:42.700639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:42.705898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:44.709397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:44.714804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:46.718166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:46.726760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:48.730559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:48.736648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:50.740876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:31:50.746071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
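
	The warnings above are deprecation notices, not errors: the storage-provisioner keeps touching a v1 Endpoints object every couple of seconds (likely its leader-election lock, which this provisioner version stores in Endpoints), and Kubernetes deprecates that API in favor of EndpointSlice from v1.33. The replacement objects already exist and can be inspected with:
	
	  kubectl --context functional-447073 get endpointslices -A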
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-447073 -n functional-447073
helpers_test.go:269: (dbg) Run:  kubectl --context functional-447073 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-dtkfb sp-pod dashboard-metrics-scraper-77bf4d6c4c-4wkrm kubernetes-dashboard-855c9754f9-2k76x
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-447073 describe pod busybox-mount mysql-5bb876957f-dtkfb sp-pod dashboard-metrics-scraper-77bf4d6c4c-4wkrm kubernetes-dashboard-855c9754f9-2k76x
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-447073 describe pod busybox-mount mysql-5bb876957f-dtkfb sp-pod dashboard-metrics-scraper-77bf4d6c4c-4wkrm kubernetes-dashboard-855c9754f9-2k76x: exit status 1 (82.461499ms)
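
The non-zero exit is expected here: without a -n flag, kubectl describe pod only searches the default namespace, so the two dashboard pods are reported as not found even though the default-namespace pods are described in the stdout below. Describing them directly would need something like (a sketch):

  kubectl --context functional-447073 describe pod -n kubernetes-dashboard dashboard-metrics-scraper-77bf4d6c4c-4wkrm kubernetes-dashboard-855c9754f9-2k76x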

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-447073/192.168.39.191
	Start Time:       Sat, 25 Oct 2025 09:26:50 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  docker://ef24ad09815d17db8560c2d6e888f75ad6694f3e9beecdee1e27c0a47ad08c06
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 25 Oct 2025 09:26:53 +0000
	      Finished:     Sat, 25 Oct 2025 09:26:53 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tvcbl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-tvcbl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m1s   default-scheduler  Successfully assigned default/busybox-mount to functional-447073
	  Normal  Pulling    5m     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m58s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.123s (2.123s including waiting). Image size: 4403845 bytes.
	  Normal  Created    4m58s  kubelet            Created container: mount-munger
	  Normal  Started    4m58s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-dtkfb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-447073/192.168.39.191
	Start Time:       Sat, 25 Oct 2025 09:26:58 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r4wbt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-r4wbt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m53s                 default-scheduler  Successfully assigned default/mysql-5bb876957f-dtkfb to functional-447073
	  Warning  Failed     4m8s                  kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    104s (x5 over 4m52s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     103s (x4 over 4m51s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     103s (x5 over 4m51s)  kubelet            Error: ErrImagePull
	  Warning  Failed     50s (x15 over 4m50s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    12s (x18 over 4m50s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-447073/192.168.39.191
	Start Time:       Sat, 25 Oct 2025 09:27:04 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vpx5h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vpx5h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m47s                 default-scheduler  Successfully assigned default/sp-pod to functional-447073
	  Normal   Pulling    96s (x5 over 4m46s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     95s (x5 over 4m45s)   kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     95s (x5 over 4m45s)   kubelet            Error: ErrImagePull
	  Warning  Failed     47s (x15 over 4m45s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    8s (x18 over 4m45s)   kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-4wkrm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-2k76x" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-447073 describe pod busybox-mount mysql-5bb876957f-dtkfb sp-pod dashboard-metrics-scraper-77bf4d6c4c-4wkrm kubernetes-dashboard-855c9754f9-2k76x: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (301.70s)
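Two distinct problems are visible in this post-mortem: the two dashboard pods were "not found" only because they live in the kubernetes-dashboard namespace rather than default, and every other pod is stuck on docker.io's unauthenticated pull rate limit. A hedged workaround for the latter, assuming a host that is authenticated against (or not yet throttled by) Docker Hub, is to pull the images there and side-load them so the kubelet never contacts the registry:

    $ docker pull docker.io/kubernetesui/dashboard:v2.7.0
    $ docker pull docker.io/kubernetesui/metrics-scraper:v1.0.8
    $ minikube -p functional-447073 image load docker.io/kubernetesui/dashboard:v2.7.0
    $ minikube -p functional-447073 image load docker.io/kubernetesui/metrics-scraper:v1.0.8

minikube image load copies a locally cached image into the profile's container runtime, so the pull backoff never triggers.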

x
+
TestFunctional/parallel/PersistentVolumeClaim (369.58s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [96f05010-86aa-453d-901d-cbe8e4213294] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003909188s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-447073 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-447073 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-447073 get pvc myclaim -o=json
I1025 09:27:03.158053  371331 retry.go:31] will retry after 1.13481313s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:768bcf93-8d4e-4674-9029-b4650ff44eb9 ResourceVersion:855 Generation:0 CreationTimestamp:2025-10-25 09:27:03 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001b72130 VolumeMode:0xc001b72140 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
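The retry above is the harness polling .status.phase until the claim binds. The same wait can be expressed directly with kubectl; a sketch assuming kubectl v1.23 or newer, which added jsonpath conditions to kubectl wait:

    $ kubectl --context functional-447073 -n default wait pvc/myclaim \
        --for=jsonpath='{.status.phase}'=Bound --timeout=2m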
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-447073 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-447073 apply -f testdata/storage-provisioner/pod.yaml
I1025 09:27:04.482096  371331 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4d512914-9461-4c84-9831-d6966d601a40] Pending
helpers_test.go:352: "sp-pod" [4d512914-9461-4c84-9831-d6966d601a40] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-447073 -n functional-447073
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-10-25 09:33:04.723763653 +0000 UTC m=+1290.848382795
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-447073 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-447073 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-447073/192.168.39.191
Start Time:       Sat, 25 Oct 2025 09:27:04 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.13
IPs:
  IP:  10.244.0.13
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vpx5h (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-vpx5h:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/sp-pod to functional-447073
  Normal   Pulling    2m49s (x5 over 5m59s)  kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     2m48s (x5 over 5m58s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     2m48s (x5 over 5m58s)  kubelet            Error: ErrImagePull
  Warning  Failed     57s (x20 over 5m58s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    46s (x21 over 5m58s)   kubelet            Back-off pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-447073 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-447073 logs sp-pod -n default: exit status 1 (77.331729ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-447073 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
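The claim itself bound and sp-pod scheduled; only the docker.io/nginx pull fails, again on the unauthenticated rate limit. Besides side-loading the image, a hedged alternative is to pull with credentials through an image pull secret; the username and token below are placeholders, not values from this run:

    $ kubectl --context functional-447073 create secret docker-registry regcred \
        --docker-server=https://index.docker.io/v1/ \
        --docker-username=<dockerhub-user> --docker-password=<access-token>
    $ kubectl --context functional-447073 patch serviceaccount default \
        -p '{"imagePullSecrets":[{"name":"regcred"}]}'

Patching the default ServiceAccount applies the credentials to every pod in the namespace that uses it, so sp-pod would need no spec change.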
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-447073 -n functional-447073
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 logs -n 25
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service        │ functional-447073 service hello-node --url                                                                         │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ ssh            │ functional-447073 ssh sudo umount -f /mount-9p                                                                     │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │                     │
	│ ssh            │ functional-447073 ssh echo hello                                                                                   │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ ssh            │ functional-447073 ssh cat /etc/hostname                                                                            │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ mount          │ -p functional-447073 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1167561909/001:/mount2 --alsologtostderr -v=1 │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │                     │
	│ ssh            │ functional-447073 ssh findmnt -T /mount1                                                                           │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │                     │
	│ mount          │ -p functional-447073 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1167561909/001:/mount1 --alsologtostderr -v=1 │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │                     │
	│ ssh            │ functional-447073 ssh sudo cat /etc/test/nested/copy/371331/hosts                                                  │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ addons         │ functional-447073 addons list                                                                                      │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ addons         │ functional-447073 addons list -o json                                                                              │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ ssh            │ functional-447073 ssh findmnt -T /mount1                                                                           │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ ssh            │ functional-447073 ssh findmnt -T /mount2                                                                           │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ ssh            │ functional-447073 ssh findmnt -T /mount3                                                                           │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ mount          │ -p functional-447073 --kill=true                                                                                   │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │                     │
	│ service        │ functional-447073 service hello-node-connect --url                                                                 │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ image          │ functional-447073 image ls --format short --alsologtostderr                                                        │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ image          │ functional-447073 image ls --format json --alsologtostderr                                                         │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ image          │ functional-447073 image ls --format table --alsologtostderr                                                        │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ image          │ functional-447073 image ls --format yaml --alsologtostderr                                                         │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ ssh            │ functional-447073 ssh pgrep buildkitd                                                                              │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │                     │
	│ image          │ functional-447073 image build -t localhost/my-image:functional-447073 testdata/build --alsologtostderr             │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ image          │ functional-447073 image ls                                                                                         │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ update-context │ functional-447073 update-context --alsologtostderr -v=2                                                            │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ update-context │ functional-447073 update-context --alsologtostderr -v=2                                                            │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ update-context │ functional-447073 update-context --alsologtostderr -v=2                                                            │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:26:50
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:26:50.220293  381001 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:26:50.220471  381001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:26:50.220487  381001 out.go:374] Setting ErrFile to fd 2...
	I1025 09:26:50.220494  381001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:26:50.220916  381001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
	I1025 09:26:50.221572  381001 out.go:368] Setting JSON to false
	I1025 09:26:50.222863  381001 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4152,"bootTime":1761380258,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:26:50.223001  381001 start.go:141] virtualization: kvm guest
	I1025 09:26:50.224795  381001 out.go:179] * [functional-447073] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:26:50.226216  381001 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:26:50.226244  381001 notify.go:220] Checking for updates...
	I1025 09:26:50.228265  381001 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:26:50.229474  381001 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-367343/kubeconfig
	I1025 09:26:50.230673  381001 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-367343/.minikube
	I1025 09:26:50.231736  381001 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:26:50.232879  381001 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:26:50.234299  381001 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:26:50.234772  381001 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:26:50.271376  381001 out.go:179] * Using the kvm2 driver based on existing profile
	I1025 09:26:50.272952  381001 start.go:305] selected driver: kvm2
	I1025 09:26:50.272970  381001 start.go:925] validating driver "kvm2" against &{Name:functional-447073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-447073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:26:50.273073  381001 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:26:50.275002  381001 out.go:203] 
	W1025 09:26:50.276433  381001 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB
	I1025 09:26:50.278578  381001 out.go:203] 
	
	
	==> Docker <==
	Oct 25 09:27:37 functional-447073 dockerd[6728]: time="2025-10-25T09:27:37.355435743Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:27:37 functional-447073 dockerd[6728]: time="2025-10-25T09:27:37.841073583Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:27:43 functional-447073 dockerd[6728]: time="2025-10-25T09:27:43.378185569Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:27:43 functional-447073 cri-dockerd[7606]: time="2025-10-25T09:27:43Z" level=info msg="Stop pulling image docker.io/mysql:5.7: 5.7: Pulling from library/mysql"
	Oct 25 09:27:52 functional-447073 dockerd[6728]: time="2025-10-25T09:27:52.113030399Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:28:24 functional-447073 dockerd[6728]: time="2025-10-25T09:28:24.349728013Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:28:24 functional-447073 dockerd[6728]: time="2025-10-25T09:28:24.833330000Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:28:32 functional-447073 dockerd[6728]: time="2025-10-25T09:28:32.345056635Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:28:32 functional-447073 dockerd[6728]: time="2025-10-25T09:28:32.831404480Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:28:36 functional-447073 dockerd[6728]: time="2025-10-25T09:28:36.101386579Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:28:43 functional-447073 dockerd[6728]: time="2025-10-25T09:28:43.094488874Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:29:56 functional-447073 dockerd[6728]: time="2025-10-25T09:29:56.350076943Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:29:56 functional-447073 dockerd[6728]: time="2025-10-25T09:29:56.837007356Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:30:02 functional-447073 dockerd[6728]: time="2025-10-25T09:30:02.346878112Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:30:03 functional-447073 dockerd[6728]: time="2025-10-25T09:30:03.127942932Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:30:03 functional-447073 cri-dockerd[7606]: time="2025-10-25T09:30:03Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
	Oct 25 09:30:08 functional-447073 dockerd[6728]: time="2025-10-25T09:30:08.132928928Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:30:16 functional-447073 dockerd[6728]: time="2025-10-25T09:30:16.136595644Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:50 functional-447073 dockerd[6728]: time="2025-10-25T09:32:50.352918707Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:32:51 functional-447073 dockerd[6728]: time="2025-10-25T09:32:51.141542625Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:51 functional-447073 cri-dockerd[7606]: time="2025-10-25T09:32:51Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Oct 25 09:32:51 functional-447073 dockerd[6728]: time="2025-10-25T09:32:51.383268438Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:32:51 functional-447073 dockerd[6728]: time="2025-10-25T09:32:51.870574224Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:56 functional-447073 dockerd[6728]: time="2025-10-25T09:32:56.168852813Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:59 functional-447073 dockerd[6728]: time="2025-10-25T09:32:59.106224475Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bec3e541755dd       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           6 minutes ago       Running             echo-server               0                   b670aab77940a       hello-node-connect-7d85dfc575-55ktk
	ef24ad09815d1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 minutes ago       Exited              mount-munger              0                   f8a0438a40181       busybox-mount
	410b2328e566f       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           6 minutes ago       Running             echo-server               0                   22e650fd2b6a5       hello-node-75c85bcc94-lfbb2
	905d241ed9acf       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   2                   03a4522316094       coredns-66bc5c9577-gs9ll
	ca7033bc221e8       fc25172553d79                                                                                         6 minutes ago       Running             kube-proxy                3                   9d0b19428b8a9       kube-proxy-dn86t
	ecca13c086d2d       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       4                   9010440b6f402       storage-provisioner
	db59fa10501d3       c3994bc696102                                                                                         6 minutes ago       Running             kube-apiserver            0                   423851c495f20       kube-apiserver-functional-447073
	20ee75adca2d4       c80c8dbafe7dd                                                                                         6 minutes ago       Running             kube-controller-manager   3                   4d192cdbeb7b7       kube-controller-manager-functional-447073
	e2de07c1e9692       7dd6aaa1717ab                                                                                         6 minutes ago       Running             kube-scheduler            3                   f05d7ba906312       kube-scheduler-functional-447073
	4d8d2f350016d       5f1f5298c888d                                                                                         6 minutes ago       Running             etcd                      2                   8df2a89e1da3c       etcd-functional-447073
	c528e6a20a051       7dd6aaa1717ab                                                                                         6 minutes ago       Exited              kube-scheduler            2                   b19de97ddd2c0       kube-scheduler-functional-447073
	e34b4f4904825       c80c8dbafe7dd                                                                                         6 minutes ago       Exited              kube-controller-manager   2                   6ce6473a977f6       kube-controller-manager-functional-447073
	87f021b308baf       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       3                   2119e1d5b5e1b       storage-provisioner
	5cc934531c377       fc25172553d79                                                                                         6 minutes ago       Exited              kube-proxy                2                   b64ce78b38364       kube-proxy-dn86t
	abebe2c25d10b       52546a367cc9e                                                                                         7 minutes ago       Exited              coredns                   1                   3fdc0f4fcd22b       coredns-66bc5c9577-gs9ll
	9aae27d2a9866       5f1f5298c888d                                                                                         7 minutes ago       Exited              etcd                      1                   88c235d505b1d       etcd-functional-447073
	
	
	==> coredns [905d241ed9ac] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48904 - 16127 "HINFO IN 4123962544261759008.795220563287131680. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.071803834s
	
	
	==> coredns [abebe2c25d10] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51408 - 37004 "HINFO IN 8190550577693892366.537718483269858856. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.072993556s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-447073
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-447073
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=functional-447073
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_23_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:23:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-447073
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:33:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:31:10 +0000   Sat, 25 Oct 2025 09:23:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:31:10 +0000   Sat, 25 Oct 2025 09:23:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:31:10 +0000   Sat, 25 Oct 2025 09:23:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:31:10 +0000   Sat, 25 Oct 2025 09:24:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.191
	  Hostname:    functional-447073
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b2e995ea732422c9b8a28d66c0cc0f7
	  System UUID:                0b2e995e-a732-422c-9b8a-28d66c0cc0f7
	  Boot ID:                    11080a84-7c92-4095-a589-4b4722f040a1
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-lfbb2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  default                     hello-node-connect-7d85dfc575-55ktk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  default                     mysql-5bb876957f-dtkfb                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    6m7s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-66bc5c9577-gs9ll                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m1s
	  kube-system                 etcd-functional-447073                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m7s
	  kube-system                 kube-apiserver-functional-447073              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 kube-controller-manager-functional-447073     200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-proxy-dn86t                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m2s
	  kube-system                 kube-scheduler-functional-447073              100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m1s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-4wkrm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2k76x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m39s                  kube-proxy       
	  Normal   Starting                 7m39s                  kube-proxy       
	  Normal   Starting                 8m59s                  kube-proxy       
	  Normal   Starting                 9m7s                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m7s                   kubelet          Node functional-447073 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m7s                   kubelet          Node functional-447073 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m7s                   kubelet          Node functional-447073 status is now: NodeHasSufficientPID
	  Normal   NodeReady                9m5s                   kubelet          Node functional-447073 status is now: NodeReady
	  Normal   RegisteredNode           9m3s                   node-controller  Node functional-447073 event: Registered Node functional-447073 in Controller
	  Warning  ContainerGCFailed        8m7s                   kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   Starting                 7m45s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  7m45s (x8 over 7m45s)  kubelet          Node functional-447073 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m45s (x8 over 7m45s)  kubelet          Node functional-447073 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m45s (x7 over 7m45s)  kubelet          Node functional-447073 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  7m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           7m38s                  node-controller  Node functional-447073 event: Registered Node functional-447073 in Controller
	  Normal   Starting                 6m44s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m44s (x8 over 6m44s)  kubelet          Node functional-447073 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m44s (x8 over 6m44s)  kubelet          Node functional-447073 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m44s (x7 over 6m44s)  kubelet          Node functional-447073 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           6m37s                  node-controller  Node functional-447073 event: Registered Node functional-447073 in Controller
	
	
	==> dmesg <==
	[  +1.174406] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.115483] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.116208] kauditd_printk_skb: 373 callbacks suppressed
	[  +0.099021] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.143359] kauditd_printk_skb: 165 callbacks suppressed
	[Oct25 09:24] kauditd_printk_skb: 18 callbacks suppressed
	[ +29.912852] kauditd_printk_skb: 214 callbacks suppressed
	[  +0.169938] kauditd_printk_skb: 12 callbacks suppressed
	[Oct25 09:25] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.469066] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.106289] kauditd_printk_skb: 218 callbacks suppressed
	[  +0.323254] kauditd_printk_skb: 234 callbacks suppressed
	[  +4.469445] kauditd_printk_skb: 17 callbacks suppressed
	[Oct25 09:26] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.497036] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.121277] kauditd_printk_skb: 466 callbacks suppressed
	[  +0.233285] kauditd_printk_skb: 174 callbacks suppressed
	[ +16.977484] kauditd_printk_skb: 17 callbacks suppressed
	[  +1.222076] kauditd_printk_skb: 133 callbacks suppressed
	[  +1.004494] kauditd_printk_skb: 172 callbacks suppressed
	[Oct25 09:27] kauditd_printk_skb: 80 callbacks suppressed
	[  +3.895106] crun[12285]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.000042] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [4d8d2f350016] <==
	{"level":"warn","ts":"2025-10-25T09:26:23.679015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.702315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.709649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.718819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.737354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.758875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.767706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.779301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.791931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.804457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.813676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.823099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.841377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.844791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.858190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.867826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.878826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.902067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.902334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.918936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.927854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.938312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.951399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.974176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:24.021985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45934","server-name":"","error":"EOF"}
	
	
	==> etcd [9aae27d2a986] <==
	{"level":"warn","ts":"2025-10-25T09:25:23.452417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:25:23.464651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:25:23.483282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:25:23.494583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:25:23.503664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:25:23.519803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:25:23.601127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38398","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:26:03.628328Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-25T09:26:03.628431Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-447073","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.191:2380"],"advertise-client-urls":["https://192.168.39.191:2379"]}
	{"level":"error","ts":"2025-10-25T09:26:03.628598Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:26:10.634879Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:26:10.638351Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:26:10.638454Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f21a8e08563785d2","current-leader-member-id":"f21a8e08563785d2"}
	{"level":"warn","ts":"2025-10-25T09:26:10.638558Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T09:26:10.638597Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T09:26:10.638604Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:26:10.638623Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-25T09:26:10.638638Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-25T09:26:10.638678Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.191:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T09:26:10.638685Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.191:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T09:26:10.638691Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.191:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:26:10.642995Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.191:2380"}
	{"level":"error","ts":"2025-10-25T09:26:10.643056Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.191:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:26:10.643078Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.191:2380"}
	{"level":"info","ts":"2025-10-25T09:26:10.643085Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-447073","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.191:2380"],"advertise-client-urls":["https://192.168.39.191:2379"]}
	
	
	==> kernel <==
	 09:33:05 up 9 min,  0 users,  load average: 0.68, 0.51, 0.27
	Linux functional-447073 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [db59fa10501d] <==
	I1025 09:26:24.745586       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 09:26:24.745772       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 09:26:24.746362       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:26:24.748073       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 09:26:24.748276       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:26:24.748468       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:26:24.754999       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1025 09:26:24.757183       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:26:24.761234       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:26:25.139456       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:26:25.558807       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:26:26.299429       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:26:26.350412       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:26:26.381841       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:26:26.389277       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:26:28.111401       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:26:28.313975       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:26:43.543969       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.57.151"}
	I1025 09:26:47.768251       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:26:47.882681       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.253.63"}
	I1025 09:26:51.629404       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:26:52.044465       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.127.34"}
	I1025 09:26:52.063664       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.8.206"}
	I1025 09:26:58.373740       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.250.97"}
	I1025 09:26:59.226227       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.146.159"}
	
	
	==> kube-controller-manager [20ee75adca2d] <==
	I1025 09:26:28.032457       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:26:28.032467       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:26:28.035335       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 09:26:28.035429       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 09:26:28.035472       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 09:26:28.035490       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 09:26:28.035495       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 09:26:28.040154       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:26:28.040192       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:26:28.040210       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:26:28.040226       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:26:28.042592       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:26:28.042682       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:26:28.042986       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-447073"
	I1025 09:26:28.043119       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 09:26:28.048498       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:26:28.051309       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:26:28.057190       1 shared_informer.go:356] "Caches are synced" controller="GC"
	E1025 09:26:51.769774       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:26:51.782545       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:26:51.795629       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:26:51.797656       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:26:51.811269       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:26:51.852314       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:26:51.883342       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
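
[editor's note] The "serviceaccount not found" bursts above are a startup race: the dashboard addon's ReplicaSets are reconciled before their ServiceAccount has been created, and the errors stop once it exists. A minimal check that the account eventually landed (a sketch against this cluster's context, not part of the test harness):

    kubectl --context functional-447073 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard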
	
	
	==> kube-controller-manager [e34b4f490482] <==
	I1025 09:26:17.106425       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-proxy [5cc934531c37] <==
	I1025 09:26:16.365383       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:26:16.435233       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1025 09:26:16.440557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-447073&limit=500&resourceVersion=0\": dial tcp 192.168.39.191:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [ca7033bc221e] <==
	I1025 09:26:26.131945       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:26:26.232977       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:26:26.233254       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.191"]
	E1025 09:26:26.233581       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:26:26.292159       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1025 09:26:26.293179       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 09:26:26.293951       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:26:26.307133       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:26:26.307837       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:26:26.307850       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:26:26.314195       1 config.go:200] "Starting service config controller"
	I1025 09:26:26.314226       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:26:26.314243       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:26:26.314246       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:26:26.314260       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:26:26.314263       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:26:26.315395       1 config.go:309] "Starting node config controller"
	I1025 09:26:26.315422       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:26:26.315428       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:26:26.414802       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:26:26.416068       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:26:26.416103       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
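
[editor's note] The nodePortAddresses warning above is advisory: with the field unset, NodePort services accept connections on every local IP. A minimal sketch of the narrowing the message suggests, assuming a kubeadm-style layout where kube-proxy reads its configuration from the kube-proxy ConfigMap:

    # In the embedded KubeProxyConfiguration, set:
    #   nodePortAddresses: ["primary"]
    kubectl -n kube-system edit configmap kube-proxy
    # Restart kube-proxy so it picks up the change.
    kubectl -n kube-system rollout restart daemonset kube-proxy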
	
	
	==> kube-scheduler [c528e6a20a05] <==
	
	
	==> kube-scheduler [e2de07c1e969] <==
	I1025 09:26:23.340097       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:26:24.611670       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:26:24.611706       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:26:24.611715       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:26:24.611721       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:26:24.675112       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:26:24.675148       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:26:24.683687       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:26:24.684140       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:26:24.684491       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:26:24.684696       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:26:24.785191       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
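
[editor's note] The three authentication warnings above are expected while the API server is still coming up; the message itself carries the remedy if they persist. Spelled out with the placeholders left exactly as the warning gives them (ROLEBINDING_NAME, YOUR_NS, and YOUR_SA are to be filled in):

    kubectl create rolebinding -n kube-system ROLEBINDING_NAME \
      --role=extension-apiserver-authentication-reader \
      --serviceaccount=YOUR_NS:YOUR_SA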
	
	
	==> kubelet <==
	Oct 25 09:32:24 functional-447073 kubelet[9062]: E1025 09:32:24.106334    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2k76x" podUID="bc0bff4e-676e-4de8-8733-3690e3f1c32a"
	Oct 25 09:32:25 functional-447073 kubelet[9062]: E1025 09:32:25.107291    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4wkrm" podUID="2353cbd7-6db4-478f-8b7f-3d7011346eb4"
	Oct 25 09:32:29 functional-447073 kubelet[9062]: E1025 09:32:29.111796    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d512914-9461-4c84-9831-d6966d601a40"
	Oct 25 09:32:30 functional-447073 kubelet[9062]: E1025 09:32:30.106006    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-dtkfb" podUID="f5239928-fb9e-48fa-b7e6-adc7b4ec2c3e"
	Oct 25 09:32:35 functional-447073 kubelet[9062]: E1025 09:32:35.106254    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2k76x" podUID="bc0bff4e-676e-4de8-8733-3690e3f1c32a"
	Oct 25 09:32:39 functional-447073 kubelet[9062]: E1025 09:32:39.112751    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4wkrm" podUID="2353cbd7-6db4-478f-8b7f-3d7011346eb4"
	Oct 25 09:32:42 functional-447073 kubelet[9062]: E1025 09:32:42.106948    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-dtkfb" podUID="f5239928-fb9e-48fa-b7e6-adc7b4ec2c3e"
	Oct 25 09:32:43 functional-447073 kubelet[9062]: E1025 09:32:43.103955    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d512914-9461-4c84-9831-d6966d601a40"
	Oct 25 09:32:51 functional-447073 kubelet[9062]: E1025 09:32:51.146402    9062 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:32:51 functional-447073 kubelet[9062]: E1025 09:32:51.146451    9062 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:32:51 functional-447073 kubelet[9062]: E1025 09:32:51.146690    9062 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-2k76x_kubernetes-dashboard(bc0bff4e-676e-4de8-8733-3690e3f1c32a): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 25 09:32:51 functional-447073 kubelet[9062]: E1025 09:32:51.146739    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2k76x" podUID="bc0bff4e-676e-4de8-8733-3690e3f1c32a"
	Oct 25 09:32:51 functional-447073 kubelet[9062]: E1025 09:32:51.876465    9062 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:32:51 functional-447073 kubelet[9062]: E1025 09:32:51.876642    9062 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:32:51 functional-447073 kubelet[9062]: E1025 09:32:51.876874    9062 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-4wkrm_kubernetes-dashboard(2353cbd7-6db4-478f-8b7f-3d7011346eb4): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 25 09:32:51 functional-447073 kubelet[9062]: E1025 09:32:51.876931    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4wkrm" podUID="2353cbd7-6db4-478f-8b7f-3d7011346eb4"
	Oct 25 09:32:56 functional-447073 kubelet[9062]: E1025 09:32:56.173079    9062 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Oct 25 09:32:56 functional-447073 kubelet[9062]: E1025 09:32:56.173136    9062 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Oct 25 09:32:56 functional-447073 kubelet[9062]: E1025 09:32:56.173543    9062 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-dtkfb_default(f5239928-fb9e-48fa-b7e6-adc7b4ec2c3e): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 25 09:32:56 functional-447073 kubelet[9062]: E1025 09:32:56.173579    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-dtkfb" podUID="f5239928-fb9e-48fa-b7e6-adc7b4ec2c3e"
	Oct 25 09:32:59 functional-447073 kubelet[9062]: E1025 09:32:59.114309    9062 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 25 09:32:59 functional-447073 kubelet[9062]: E1025 09:32:59.114746    9062 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 25 09:32:59 functional-447073 kubelet[9062]: E1025 09:32:59.115271    9062 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(4d512914-9461-4c84-9831-d6966d601a40): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 25 09:32:59 functional-447073 kubelet[9062]: E1025 09:32:59.115681    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d512914-9461-4c84-9831-d6966d601a40"
	Oct 25 09:33:04 functional-447073 kubelet[9062]: E1025 09:33:04.107997    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2k76x" podUID="bc0bff4e-676e-4de8-8733-3690e3f1c32a"
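
[editor's note] Every kubelet error above has the same root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests), not a fault in the workloads themselves. Two common mitigations, sketched here as options rather than the fix this CI job uses: pre-load the images into the node so no registry pull is needed, or authenticate the pulls:

    # Option 1: load images straight into the minikube node.
    minikube -p functional-447073 image load docker.io/mysql:5.7
    minikube -p functional-447073 image load docker.io/nginx:latest

    # Option 2: create a Docker Hub pull secret (credentials are placeholders)
    # and reference it from pod specs via imagePullSecrets.
    kubectl create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> --docker-password=<token>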
	
	
	==> storage-provisioner [87f021b308ba] <==
	I1025 09:26:16.508377       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:26:16.513155       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [ecca13c086d2] <==
	W1025 09:32:41.028911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:32:43.032350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:32:43.037596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:32:45.040827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:32:45.046143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:32:47.049768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:32:47.057963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:32:49.061494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:32:49.067625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:32:51.071652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:32:51.076816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:32:53.080179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:32:53.085674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:32:55.089246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:32:55.098589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:32:57.102182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:32:57.110680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:32:59.119619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:32:59.127136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:01.133092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:01.141699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:03.145421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:03.151406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:05.156018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:05.161685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
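
[editor's note] The storage-provisioner warnings above are deprecation notices, not failures: its leader-election path still polls v1 Endpoints, which Kubernetes deprecates in favor of discovery.k8s.io/v1 EndpointSlice. A quick way to confirm the replacement resource is served on this cluster (a sketch, not part of the harness):

    kubectl --context functional-447073 get endpointslices -A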
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-447073 -n functional-447073
helpers_test.go:269: (dbg) Run:  kubectl --context functional-447073 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-dtkfb sp-pod dashboard-metrics-scraper-77bf4d6c4c-4wkrm kubernetes-dashboard-855c9754f9-2k76x
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-447073 describe pod busybox-mount mysql-5bb876957f-dtkfb sp-pod dashboard-metrics-scraper-77bf4d6c4c-4wkrm kubernetes-dashboard-855c9754f9-2k76x
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-447073 describe pod busybox-mount mysql-5bb876957f-dtkfb sp-pod dashboard-metrics-scraper-77bf4d6c4c-4wkrm kubernetes-dashboard-855c9754f9-2k76x: exit status 1 (87.742835ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-447073/192.168.39.191
	Start Time:       Sat, 25 Oct 2025 09:26:50 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  docker://ef24ad09815d17db8560c2d6e888f75ad6694f3e9beecdee1e27c0a47ad08c06
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 25 Oct 2025 09:26:53 +0000
	      Finished:     Sat, 25 Oct 2025 09:26:53 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tvcbl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-tvcbl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m15s  default-scheduler  Successfully assigned default/busybox-mount to functional-447073
	  Normal  Pulling    6m15s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6m13s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.123s (2.123s including waiting). Image size: 4403845 bytes.
	  Normal  Created    6m13s  kubelet            Created container: mount-munger
	  Normal  Started    6m13s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-dtkfb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-447073/192.168.39.191
	Start Time:       Sat, 25 Oct 2025 09:26:58 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r4wbt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-r4wbt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m8s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-dtkfb to functional-447073
	  Warning  Failed     5m23s                 kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m59s (x5 over 6m7s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m58s (x4 over 6m6s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m58s (x5 over 6m6s)  kubelet            Error: ErrImagePull
	  Warning  Failed     62s (x20 over 6m5s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    47s (x21 over 6m5s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-447073/192.168.39.191
	Start Time:       Sat, 25 Oct 2025 09:27:04 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vpx5h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vpx5h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m1s                  default-scheduler  Successfully assigned default/sp-pod to functional-447073
	  Normal   Pulling    2m51s (x5 over 6m1s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m50s (x5 over 6m)    kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m50s (x5 over 6m)    kubelet            Error: ErrImagePull
	  Warning  Failed     59s (x20 over 6m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    48s (x21 over 6m)     kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-4wkrm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-2k76x" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-447073 describe pod busybox-mount mysql-5bb876957f-dtkfb sp-pod dashboard-metrics-scraper-77bf4d6c4c-4wkrm kubernetes-dashboard-855c9754f9-2k76x: exit status 1
E1025 09:35:36.551184  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (369.58s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-447073 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
E1025 09:26:58.492938  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "mysql-5bb876957f-dtkfb" [f5239928-fb9e-48fa-b7e6-adc7b4ec2c3e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-447073 -n functional-447073
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-10-25 09:36:58.66025552 +0000 UTC m=+1524.784874656
functional_test.go:1804: (dbg) Run:  kubectl --context functional-447073 describe po mysql-5bb876957f-dtkfb -n default
functional_test.go:1804: (dbg) kubectl --context functional-447073 describe po mysql-5bb876957f-dtkfb -n default:
Name:             mysql-5bb876957f-dtkfb
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-447073/192.168.39.191
Start Time:       Sat, 25 Oct 2025 09:26:58 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r4wbt (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-r4wbt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-5bb876957f-dtkfb to functional-447073
  Warning  Failed     9m15s                   kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    6m51s (x5 over 9m59s)   kubelet            Pulling image "docker.io/mysql:5.7"
  Warning  Failed     6m50s (x4 over 9m58s)   kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     6m50s (x5 over 9m58s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m54s (x20 over 9m57s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m39s (x21 over 9m57s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-447073 logs mysql-5bb876957f-dtkfb -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-447073 logs mysql-5bb876957f-dtkfb -n default: exit status 1 (69.509561ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-dtkfb" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-447073 logs mysql-5bb876957f-dtkfb -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
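[editor's note] The 10m0s budget is consumed entirely by image-pull back-off; the pod never leaves Pending. To confirm the rate limit is the only blocker, the pod's event stream can be filtered directly (a sketch against this cluster's context):

    kubectl --context functional-447073 get events -n default \
      --field-selector involvedObject.name=mysql-5bb876957f-dtkfb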
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-447073 -n functional-447073
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 logs -n 25
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service        │ functional-447073 service hello-node --url                                                                         │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ ssh            │ functional-447073 ssh sudo umount -f /mount-9p                                                                     │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │                     │
	│ ssh            │ functional-447073 ssh echo hello                                                                                   │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ ssh            │ functional-447073 ssh cat /etc/hostname                                                                            │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ mount          │ -p functional-447073 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1167561909/001:/mount2 --alsologtostderr -v=1 │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │                     │
	│ ssh            │ functional-447073 ssh findmnt -T /mount1                                                                           │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │                     │
	│ mount          │ -p functional-447073 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1167561909/001:/mount1 --alsologtostderr -v=1 │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │                     │
	│ ssh            │ functional-447073 ssh sudo cat /etc/test/nested/copy/371331/hosts                                                  │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ addons         │ functional-447073 addons list                                                                                      │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ addons         │ functional-447073 addons list -o json                                                                              │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ ssh            │ functional-447073 ssh findmnt -T /mount1                                                                           │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ ssh            │ functional-447073 ssh findmnt -T /mount2                                                                           │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ ssh            │ functional-447073 ssh findmnt -T /mount3                                                                           │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │ 25 Oct 25 09:26 UTC │
	│ mount          │ -p functional-447073 --kill=true                                                                                   │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │                     │
	│ service        │ functional-447073 service hello-node-connect --url                                                                 │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ image          │ functional-447073 image ls --format short --alsologtostderr                                                        │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ image          │ functional-447073 image ls --format json --alsologtostderr                                                         │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ image          │ functional-447073 image ls --format table --alsologtostderr                                                        │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ image          │ functional-447073 image ls --format yaml --alsologtostderr                                                         │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ ssh            │ functional-447073 ssh pgrep buildkitd                                                                              │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │                     │
	│ image          │ functional-447073 image build -t localhost/my-image:functional-447073 testdata/build --alsologtostderr             │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ image          │ functional-447073 image ls                                                                                         │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ update-context │ functional-447073 update-context --alsologtostderr -v=2                                                            │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ update-context │ functional-447073 update-context --alsologtostderr -v=2                                                            │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ update-context │ functional-447073 update-context --alsologtostderr -v=2                                                            │ functional-447073 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:26:50
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:26:50.220293  381001 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:26:50.220471  381001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:26:50.220487  381001 out.go:374] Setting ErrFile to fd 2...
	I1025 09:26:50.220494  381001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:26:50.220916  381001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
	I1025 09:26:50.221572  381001 out.go:368] Setting JSON to false
	I1025 09:26:50.222863  381001 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4152,"bootTime":1761380258,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:26:50.223001  381001 start.go:141] virtualization: kvm guest
	I1025 09:26:50.224795  381001 out.go:179] * [functional-447073] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:26:50.226216  381001 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:26:50.226244  381001 notify.go:220] Checking for updates...
	I1025 09:26:50.228265  381001 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:26:50.229474  381001 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-367343/kubeconfig
	I1025 09:26:50.230673  381001 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-367343/.minikube
	I1025 09:26:50.231736  381001 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:26:50.232879  381001 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:26:50.234299  381001 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:26:50.234772  381001 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:26:50.271376  381001 out.go:179] * Using the kvm2 driver based on the existing profile
	I1025 09:26:50.272952  381001 start.go:305] selected driver: kvm2
	I1025 09:26:50.272970  381001 start.go:925] validating driver "kvm2" against &{Name:functional-447073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-447073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:26:50.273073  381001 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:26:50.275002  381001 out.go:203] 
	W1025 09:26:50.276433  381001 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is below the usable minimum of 1800MB
	I1025 09:26:50.278578  381001 out.go:203] 
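	
	Note: the RSRC_INSUFFICIENT_REQ_MEMORY exit above is minikube refusing a start request of 250MiB because it falls below the 1800MB usable minimum; the undersized figure suggests a deliberate negative-path check rather than a misconfigured run. As a minimal sketch only, a start invocation that clears that floor with this profile's configured memory would look like the following (flag values are illustrative, not taken from the report):
	
	    # request memory at or above minikube's 1800MB minimum
	    minikube start -p functional-447073 --driver=kvm2 --memory=4096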
	
	
	==> Docker <==
	Oct 25 09:27:37 functional-447073 dockerd[6728]: time="2025-10-25T09:27:37.355435743Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:27:37 functional-447073 dockerd[6728]: time="2025-10-25T09:27:37.841073583Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:27:43 functional-447073 dockerd[6728]: time="2025-10-25T09:27:43.378185569Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:27:43 functional-447073 cri-dockerd[7606]: time="2025-10-25T09:27:43Z" level=info msg="Stop pulling image docker.io/mysql:5.7: 5.7: Pulling from library/mysql"
	Oct 25 09:27:52 functional-447073 dockerd[6728]: time="2025-10-25T09:27:52.113030399Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:28:24 functional-447073 dockerd[6728]: time="2025-10-25T09:28:24.349728013Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:28:24 functional-447073 dockerd[6728]: time="2025-10-25T09:28:24.833330000Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:28:32 functional-447073 dockerd[6728]: time="2025-10-25T09:28:32.345056635Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:28:32 functional-447073 dockerd[6728]: time="2025-10-25T09:28:32.831404480Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:28:36 functional-447073 dockerd[6728]: time="2025-10-25T09:28:36.101386579Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:28:43 functional-447073 dockerd[6728]: time="2025-10-25T09:28:43.094488874Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:29:56 functional-447073 dockerd[6728]: time="2025-10-25T09:29:56.350076943Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:29:56 functional-447073 dockerd[6728]: time="2025-10-25T09:29:56.837007356Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:30:02 functional-447073 dockerd[6728]: time="2025-10-25T09:30:02.346878112Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:30:03 functional-447073 dockerd[6728]: time="2025-10-25T09:30:03.127942932Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:30:03 functional-447073 cri-dockerd[7606]: time="2025-10-25T09:30:03Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
	Oct 25 09:30:08 functional-447073 dockerd[6728]: time="2025-10-25T09:30:08.132928928Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:30:16 functional-447073 dockerd[6728]: time="2025-10-25T09:30:16.136595644Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:50 functional-447073 dockerd[6728]: time="2025-10-25T09:32:50.352918707Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:32:51 functional-447073 dockerd[6728]: time="2025-10-25T09:32:51.141542625Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:51 functional-447073 cri-dockerd[7606]: time="2025-10-25T09:32:51Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Oct 25 09:32:51 functional-447073 dockerd[6728]: time="2025-10-25T09:32:51.383268438Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:32:51 functional-447073 dockerd[6728]: time="2025-10-25T09:32:51.870574224Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:56 functional-447073 dockerd[6728]: time="2025-10-25T09:32:56.168852813Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:59 functional-447073 dockerd[6728]: time="2025-10-25T09:32:59.106224475Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
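	
	Note: every pull failure in the Docker section above shares one root cause: Docker Hub's unauthenticated pull rate limit ("toomanyrequests"), hitting mysql:5.7 and the kubernetesui dashboard/metrics-scraper images. One minimal workaround sketch, assuming Docker Hub credentials are available on the host (the login step and image choice are illustrative):
	
	    # authenticate the host's docker client, pull once, then side-load into the node
	    docker login -u "$DOCKERHUB_USER"
	    docker pull mysql:5.7
	    minikube -p functional-447073 image load mysql:5.7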
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bec3e541755dd       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           9 minutes ago       Running             echo-server               0                   b670aab77940a       hello-node-connect-7d85dfc575-55ktk
	ef24ad09815d1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              mount-munger              0                   f8a0438a40181       busybox-mount
	410b2328e566f       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           10 minutes ago      Running             echo-server               0                   22e650fd2b6a5       hello-node-75c85bcc94-lfbb2
	905d241ed9acf       52546a367cc9e                                                                                         10 minutes ago      Running             coredns                   2                   03a4522316094       coredns-66bc5c9577-gs9ll
	ca7033bc221e8       fc25172553d79                                                                                         10 minutes ago      Running             kube-proxy                3                   9d0b19428b8a9       kube-proxy-dn86t
	ecca13c086d2d       6e38f40d628db                                                                                         10 minutes ago      Running             storage-provisioner       4                   9010440b6f402       storage-provisioner
	db59fa10501d3       c3994bc696102                                                                                         10 minutes ago      Running             kube-apiserver            0                   423851c495f20       kube-apiserver-functional-447073
	20ee75adca2d4       c80c8dbafe7dd                                                                                         10 minutes ago      Running             kube-controller-manager   3                   4d192cdbeb7b7       kube-controller-manager-functional-447073
	e2de07c1e9692       7dd6aaa1717ab                                                                                         10 minutes ago      Running             kube-scheduler            3                   f05d7ba906312       kube-scheduler-functional-447073
	4d8d2f350016d       5f1f5298c888d                                                                                         10 minutes ago      Running             etcd                      2                   8df2a89e1da3c       etcd-functional-447073
	c528e6a20a051       7dd6aaa1717ab                                                                                         10 minutes ago      Exited              kube-scheduler            2                   b19de97ddd2c0       kube-scheduler-functional-447073
	e34b4f4904825       c80c8dbafe7dd                                                                                         10 minutes ago      Exited              kube-controller-manager   2                   6ce6473a977f6       kube-controller-manager-functional-447073
	87f021b308baf       6e38f40d628db                                                                                         10 minutes ago      Exited              storage-provisioner       3                   2119e1d5b5e1b       storage-provisioner
	5cc934531c377       fc25172553d79                                                                                         10 minutes ago      Exited              kube-proxy                2                   b64ce78b38364       kube-proxy-dn86t
	abebe2c25d10b       52546a367cc9e                                                                                         11 minutes ago      Exited              coredns                   1                   3fdc0f4fcd22b       coredns-66bc5c9577-gs9ll
	9aae27d2a9866       5f1f5298c888d                                                                                         11 minutes ago      Exited              etcd                      1                   88c235d505b1d       etcd-functional-447073
	
	
	==> coredns [905d241ed9ac] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48904 - 16127 "HINFO IN 4123962544261759008.795220563287131680. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.071803834s
	
	
	==> coredns [abebe2c25d10] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51408 - 37004 "HINFO IN 8190550577693892366.537718483269858856. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.072993556s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-447073
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-447073
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=functional-447073
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_23_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:23:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-447073
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:36:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:36:16 +0000   Sat, 25 Oct 2025 09:23:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:36:16 +0000   Sat, 25 Oct 2025 09:23:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:36:16 +0000   Sat, 25 Oct 2025 09:23:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:36:16 +0000   Sat, 25 Oct 2025 09:24:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.191
	  Hostname:    functional-447073
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b2e995ea732422c9b8a28d66c0cc0f7
	  System UUID:                0b2e995e-a732-422c-9b8a-28d66c0cc0f7
	  Boot ID:                    11080a84-7c92-4095-a589-4b4722f040a1
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-lfbb2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-55ktk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-dtkfb                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 coredns-66bc5c9577-gs9ll                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-447073                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-functional-447073              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-447073     200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-dn86t                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-447073              100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-4wkrm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2k76x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node functional-447073 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node functional-447073 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                kubelet          Node functional-447073 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m                kubelet          Node functional-447073 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node functional-447073 event: Registered Node functional-447073 in Controller
	  Warning  ContainerGCFailed        12m                kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-447073 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-447073 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-447073 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node functional-447073 event: Registered Node functional-447073 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-447073 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-447073 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-447073 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node functional-447073 event: Registered Node functional-447073 in Controller
	
	
	==> dmesg <==
	[  +1.174406] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.115483] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.116208] kauditd_printk_skb: 373 callbacks suppressed
	[  +0.099021] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.143359] kauditd_printk_skb: 165 callbacks suppressed
	[Oct25 09:24] kauditd_printk_skb: 18 callbacks suppressed
	[ +29.912852] kauditd_printk_skb: 214 callbacks suppressed
	[  +0.169938] kauditd_printk_skb: 12 callbacks suppressed
	[Oct25 09:25] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.469066] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.106289] kauditd_printk_skb: 218 callbacks suppressed
	[  +0.323254] kauditd_printk_skb: 234 callbacks suppressed
	[  +4.469445] kauditd_printk_skb: 17 callbacks suppressed
	[Oct25 09:26] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.497036] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.121277] kauditd_printk_skb: 466 callbacks suppressed
	[  +0.233285] kauditd_printk_skb: 174 callbacks suppressed
	[ +16.977484] kauditd_printk_skb: 17 callbacks suppressed
	[  +1.222076] kauditd_printk_skb: 133 callbacks suppressed
	[  +1.004494] kauditd_printk_skb: 172 callbacks suppressed
	[Oct25 09:27] kauditd_printk_skb: 80 callbacks suppressed
	[  +3.895106] crun[12285]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.000042] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [4d8d2f350016] <==
	{"level":"warn","ts":"2025-10-25T09:26:23.718819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.737354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.758875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.767706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.779301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.791931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.804457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.813676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.823099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.841377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.844791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.858190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.867826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.878826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.902067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.902334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.918936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.927854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.938312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.951399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:23.974176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:26:24.021985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45934","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:36:23.228855Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1286}
	{"level":"info","ts":"2025-10-25T09:36:23.255274Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1286,"took":"25.503735ms","hash":3962070551,"current-db-size-bytes":3772416,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":1851392,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-10-25T09:36:23.255313Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3962070551,"revision":1286,"compact-revision":-1}
	
	
	==> etcd [9aae27d2a986] <==
	{"level":"warn","ts":"2025-10-25T09:25:23.452417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:25:23.464651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:25:23.483282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:25:23.494583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:25:23.503664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:25:23.519803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:25:23.601127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38398","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:26:03.628328Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-25T09:26:03.628431Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-447073","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.191:2380"],"advertise-client-urls":["https://192.168.39.191:2379"]}
	{"level":"error","ts":"2025-10-25T09:26:03.628598Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:26:10.634879Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:26:10.638351Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:26:10.638454Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f21a8e08563785d2","current-leader-member-id":"f21a8e08563785d2"}
	{"level":"warn","ts":"2025-10-25T09:26:10.638558Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T09:26:10.638597Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T09:26:10.638604Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:26:10.638623Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-25T09:26:10.638638Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-25T09:26:10.638678Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.191:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T09:26:10.638685Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.191:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T09:26:10.638691Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.191:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:26:10.642995Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.191:2380"}
	{"level":"error","ts":"2025-10-25T09:26:10.643056Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.191:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:26:10.643078Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.191:2380"}
	{"level":"info","ts":"2025-10-25T09:26:10.643085Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-447073","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.191:2380"],"advertise-client-urls":["https://192.168.39.191:2379"]}
	
	
	==> kernel <==
	 09:36:59 up 13 min,  0 users,  load average: 0.07, 0.31, 0.25
	Linux functional-447073 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [db59fa10501d] <==
	I1025 09:26:24.745772       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 09:26:24.746362       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:26:24.748073       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 09:26:24.748276       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:26:24.748468       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:26:24.754999       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1025 09:26:24.757183       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:26:24.761234       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:26:25.139456       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:26:25.558807       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:26:26.299429       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:26:26.350412       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:26:26.381841       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:26:26.389277       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:26:28.111401       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:26:28.313975       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:26:43.543969       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.57.151"}
	I1025 09:26:47.768251       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:26:47.882681       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.253.63"}
	I1025 09:26:51.629404       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:26:52.044465       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.127.34"}
	I1025 09:26:52.063664       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.8.206"}
	I1025 09:26:58.373740       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.250.97"}
	I1025 09:26:59.226227       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.146.159"}
	I1025 09:36:24.654419       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [20ee75adca2d] <==
	I1025 09:26:28.032457       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:26:28.032467       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:26:28.035335       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 09:26:28.035429       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 09:26:28.035472       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 09:26:28.035490       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 09:26:28.035495       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 09:26:28.040154       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:26:28.040192       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:26:28.040210       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:26:28.040226       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:26:28.042592       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:26:28.042682       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:26:28.042986       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-447073"
	I1025 09:26:28.043119       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 09:26:28.048498       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:26:28.051309       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:26:28.057190       1 shared_informer.go:356] "Caches are synced" controller="GC"
	E1025 09:26:51.769774       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:26:51.782545       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:26:51.795629       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:26:51.797656       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:26:51.811269       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:26:51.852314       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:26:51.883342       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
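	
	Note: the repeated "serviceaccount \"kubernetes-dashboard\" not found" sync errors above are a creation-order race: the dashboard ReplicaSets were applied before the namespace's ServiceAccount existed, and the controller retries until it does. The node description earlier shows both dashboard pods scheduled, so the race resolved itself. A quick verification, as a sketch:
	
	    # confirm the service account the dashboard replica sets depend on
	    kubectl -n kubernetes-dashboard get serviceaccount kubernetes-dashboard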
	
	
	==> kube-controller-manager [e34b4f490482] <==
	I1025 09:26:17.106425       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-proxy [5cc934531c37] <==
	I1025 09:26:16.365383       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:26:16.435233       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1025 09:26:16.440557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-447073&limit=500&resourceVersion=0\": dial tcp 192.168.39.191:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [ca7033bc221e] <==
	I1025 09:26:26.131945       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:26:26.232977       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:26:26.233254       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.191"]
	E1025 09:26:26.233581       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:26:26.292159       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1025 09:26:26.293179       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 09:26:26.293951       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:26:26.307133       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:26:26.307837       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:26:26.307850       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:26:26.314195       1 config.go:200] "Starting service config controller"
	I1025 09:26:26.314226       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:26:26.314243       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:26:26.314246       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:26:26.314260       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:26:26.314263       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:26:26.315395       1 config.go:309] "Starting node config controller"
	I1025 09:26:26.315422       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:26:26.315428       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:26:26.414802       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:26:26.416068       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:26:26.416103       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c528e6a20a05] <==
	
	
	==> kube-scheduler [e2de07c1e969] <==
	I1025 09:26:23.340097       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:26:24.611670       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:26:24.611706       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:26:24.611715       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:26:24.611721       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:26:24.675112       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:26:24.675148       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:26:24.683687       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:26:24.684140       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:26:24.684491       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:26:24.684696       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:26:24.785191       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:35:43 functional-447073 kubelet[9062]: E1025 09:35:43.107302    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-dtkfb" podUID="f5239928-fb9e-48fa-b7e6-adc7b4ec2c3e"
	Oct 25 09:35:46 functional-447073 kubelet[9062]: E1025 09:35:46.104871    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d512914-9461-4c84-9831-d6966d601a40"
	Oct 25 09:35:49 functional-447073 kubelet[9062]: E1025 09:35:49.107978    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4wkrm" podUID="2353cbd7-6db4-478f-8b7f-3d7011346eb4"
	Oct 25 09:35:52 functional-447073 kubelet[9062]: E1025 09:35:52.106417    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2k76x" podUID="bc0bff4e-676e-4de8-8733-3690e3f1c32a"
	Oct 25 09:35:56 functional-447073 kubelet[9062]: E1025 09:35:56.107476    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-dtkfb" podUID="f5239928-fb9e-48fa-b7e6-adc7b4ec2c3e"
	Oct 25 09:36:01 functional-447073 kubelet[9062]: E1025 09:36:01.104478    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d512914-9461-4c84-9831-d6966d601a40"
	Oct 25 09:36:03 functional-447073 kubelet[9062]: E1025 09:36:03.112109    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2k76x" podUID="bc0bff4e-676e-4de8-8733-3690e3f1c32a"
	Oct 25 09:36:04 functional-447073 kubelet[9062]: E1025 09:36:04.106403    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4wkrm" podUID="2353cbd7-6db4-478f-8b7f-3d7011346eb4"
	Oct 25 09:36:08 functional-447073 kubelet[9062]: E1025 09:36:08.106537    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-dtkfb" podUID="f5239928-fb9e-48fa-b7e6-adc7b4ec2c3e"
	Oct 25 09:36:12 functional-447073 kubelet[9062]: E1025 09:36:12.104360    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d512914-9461-4c84-9831-d6966d601a40"
	Oct 25 09:36:15 functional-447073 kubelet[9062]: E1025 09:36:15.115649    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2k76x" podUID="bc0bff4e-676e-4de8-8733-3690e3f1c32a"
	Oct 25 09:36:15 functional-447073 kubelet[9062]: E1025 09:36:15.116700    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4wkrm" podUID="2353cbd7-6db4-478f-8b7f-3d7011346eb4"
	Oct 25 09:36:21 functional-447073 kubelet[9062]: E1025 09:36:21.106285    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-dtkfb" podUID="f5239928-fb9e-48fa-b7e6-adc7b4ec2c3e"
	Oct 25 09:36:25 functional-447073 kubelet[9062]: E1025 09:36:25.104346    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d512914-9461-4c84-9831-d6966d601a40"
	Oct 25 09:36:27 functional-447073 kubelet[9062]: E1025 09:36:27.106464    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2k76x" podUID="bc0bff4e-676e-4de8-8733-3690e3f1c32a"
	Oct 25 09:36:28 functional-447073 kubelet[9062]: E1025 09:36:28.112168    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4wkrm" podUID="2353cbd7-6db4-478f-8b7f-3d7011346eb4"
	Oct 25 09:36:33 functional-447073 kubelet[9062]: E1025 09:36:33.106269    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-dtkfb" podUID="f5239928-fb9e-48fa-b7e6-adc7b4ec2c3e"
	Oct 25 09:36:40 functional-447073 kubelet[9062]: E1025 09:36:40.104033    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d512914-9461-4c84-9831-d6966d601a40"
	Oct 25 09:36:40 functional-447073 kubelet[9062]: E1025 09:36:40.110355    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2k76x" podUID="bc0bff4e-676e-4de8-8733-3690e3f1c32a"
	Oct 25 09:36:40 functional-447073 kubelet[9062]: E1025 09:36:40.110961    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4wkrm" podUID="2353cbd7-6db4-478f-8b7f-3d7011346eb4"
	Oct 25 09:36:44 functional-447073 kubelet[9062]: E1025 09:36:44.106217    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-dtkfb" podUID="f5239928-fb9e-48fa-b7e6-adc7b4ec2c3e"
	Oct 25 09:36:51 functional-447073 kubelet[9062]: E1025 09:36:51.112751    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2k76x" podUID="bc0bff4e-676e-4de8-8733-3690e3f1c32a"
	Oct 25 09:36:54 functional-447073 kubelet[9062]: E1025 09:36:54.106552    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4wkrm" podUID="2353cbd7-6db4-478f-8b7f-3d7011346eb4"
	Oct 25 09:36:55 functional-447073 kubelet[9062]: E1025 09:36:55.104144    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d512914-9461-4c84-9831-d6966d601a40"
	Oct 25 09:36:59 functional-447073 kubelet[9062]: E1025 09:36:59.106563    9062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-dtkfb" podUID="f5239928-fb9e-48fa-b7e6-adc7b4ec2c3e"
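Every kubelet error above shares one root cause: unauthenticated image pulls from Docker Hub hitting the toomanyrequests rate limit. As a hedged aside (not part of this test run), one standard mitigation is to pull with credentials via an image pull secret; the secret name regcred and the credential placeholders below are illustrative only:

	kubectl create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<hub-user> --docker-password=<hub-token>

The pod spec would then reference it via spec.imagePullSecrets (e.g. - name: regcred), after which pulls count against the authenticated, higher rate limit.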
	
	
	==> storage-provisioner [87f021b308ba] <==
	I1025 09:26:16.508377       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:26:16.513155       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
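This first storage-provisioner container exited fatally because the in-cluster API endpoint 10.96.0.1:443 refused connections, which suggests it started before the restarted apiserver was serving; the replacement container below came up normally. A hedged sanity check for that endpoint, assuming the profile's kubeconfig context:

	kubectl --context functional-447073 get svc kubernetes -n default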
	
	
	==> storage-provisioner [ecca13c086d2] <==
	W1025 09:36:34.267813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:36.271910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:36.277022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:38.281238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:38.287304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:40.292041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:40.297576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:42.301387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:42.309455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:44.312359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:44.317471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:46.321046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:46.325646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:48.329018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:48.334754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:50.338309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:50.346625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:52.349711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:52.355367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:54.358573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:54.364808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:56.369621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:56.380407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:58.384150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:58.393390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
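The repeating warnings in the second storage-provisioner log are benign: the provisioner still coordinates (most likely for leader election) through v1 Endpoints, which Kubernetes deprecates in favor of discovery.k8s.io/v1 EndpointSlice from v1.33 on. A hedged way to inspect the replacement objects in this cluster:

	kubectl --context functional-447073 -n kube-system get endpointslices.discovery.k8s.io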
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-447073 -n functional-447073
helpers_test.go:269: (dbg) Run:  kubectl --context functional-447073 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-dtkfb sp-pod dashboard-metrics-scraper-77bf4d6c4c-4wkrm kubernetes-dashboard-855c9754f9-2k76x
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-447073 describe pod busybox-mount mysql-5bb876957f-dtkfb sp-pod dashboard-metrics-scraper-77bf4d6c4c-4wkrm kubernetes-dashboard-855c9754f9-2k76x
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-447073 describe pod busybox-mount mysql-5bb876957f-dtkfb sp-pod dashboard-metrics-scraper-77bf4d6c4c-4wkrm kubernetes-dashboard-855c9754f9-2k76x: exit status 1 (83.413409ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-447073/192.168.39.191
	Start Time:       Sat, 25 Oct 2025 09:26:50 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  docker://ef24ad09815d17db8560c2d6e888f75ad6694f3e9beecdee1e27c0a47ad08c06
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 25 Oct 2025 09:26:53 +0000
	      Finished:     Sat, 25 Oct 2025 09:26:53 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tvcbl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-tvcbl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-447073
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.123s (2.123s including waiting). Image size: 4403845 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-dtkfb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-447073/192.168.39.191
	Start Time:       Sat, 25 Oct 2025 09:26:58 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r4wbt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-r4wbt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-5bb876957f-dtkfb to functional-447073
	  Warning  Failed     9m17s                   kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    6m53s (x5 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     6m52s (x4 over 10m)     kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m52s (x5 over 10m)     kubelet            Error: ErrImagePull
	  Warning  Failed     4m56s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    1s (x42 over 9m59s)     kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-447073/192.168.39.191
	Start Time:       Sat, 25 Oct 2025 09:27:04 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vpx5h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vpx5h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m55s                   default-scheduler  Successfully assigned default/sp-pod to functional-447073
	  Normal   Pulling    6m45s (x5 over 9m55s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     6m44s (x5 over 9m54s)   kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m44s (x5 over 9m54s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m53s (x20 over 9m54s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m42s (x21 over 9m54s)  kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-4wkrm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-2k76x" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-447073 describe pod busybox-mount mysql-5bb876957f-dtkfb sp-pod dashboard-metrics-scraper-77bf4d6c4c-4wkrm kubernetes-dashboard-855c9754f9-2k76x: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.09s)
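The MySQL test therefore failed purely on image availability, not on MySQL itself. A hedged workaround for reruns, assuming the same minikube binary used above: pre-load the image into the profile so the pod never pulls from Docker Hub:

	out/minikube-linux-amd64 -p functional-447073 image load docker.io/mysql:5.7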

TestStartStop/group/old-k8s-version/serial/Pause (39.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-019967 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-019967 --alsologtostderr -v=1: (1.753721356s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-019967 -n old-k8s-version-019967
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-019967 -n old-k8s-version-019967: exit status 2 (15.785412035s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-019967 -n old-k8s-version-019967
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-019967 -n old-k8s-version-019967: exit status 2 (15.806393094s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-019967 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-019967 -n old-k8s-version-019967
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-019967 -n old-k8s-version-019967
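To reproduce the failing check by hand, assuming the profile still exists, the same commands the test drives can be run directly; the assertion is that {{.APIServer}} reports Paused (not Stopped) after pause:

	out/minikube-linux-amd64 pause -p old-k8s-version-019967 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-019967 -n old-k8s-version-019967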
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019967 -n old-k8s-version-019967
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-019967 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-019967 logs -n 25: (1.780296802s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-266353 sudo systemctl status cri-docker --all --full --no-pager                                                                                   │ cilium-266353            │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo systemctl cat cri-docker --no-pager                                                                                                   │ cilium-266353            │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                              │ cilium-266353            │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                        │ cilium-266353            │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo cri-dockerd --version                                                                                                                 │ cilium-266353            │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo systemctl status containerd --all --full --no-pager                                                                                   │ cilium-266353            │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo systemctl cat containerd --no-pager                                                                                                   │ cilium-266353            │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo cat /lib/systemd/system/containerd.service                                                                                            │ cilium-266353            │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo cat /etc/containerd/config.toml                                                                                                       │ cilium-266353            │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo containerd config dump                                                                                                                │ cilium-266353            │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo systemctl status crio --all --full --no-pager                                                                                         │ cilium-266353            │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo systemctl cat crio --no-pager                                                                                                         │ cilium-266353            │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                               │ cilium-266353            │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo crio config                                                                                                                           │ cilium-266353            │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ delete  │ -p cilium-266353                                                                                                                                            │ cilium-266353            │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ start   │ -p gvisor-130661 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2                     │ gvisor-130661            │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p NoKubernetes-586342 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-586342      │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │                     │
	│ image   │ old-k8s-version-019967 image list --format=json                                                                                                             │ old-k8s-version-019967   │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │ 25 Oct 25 10:13 UTC │
	│ pause   │ -p old-k8s-version-019967 --alsologtostderr -v=1                                                                                                            │ old-k8s-version-019967   │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │ 25 Oct 25 10:13 UTC │
	│ stop    │ -p NoKubernetes-586342                                                                                                                                      │ NoKubernetes-586342      │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │ 25 Oct 25 10:13 UTC │
	│ start   │ -p NoKubernetes-586342 --driver=kvm2                                                                                                                        │ NoKubernetes-586342      │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-595347 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ running-upgrade-595347   │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │                     │
	│ delete  │ -p running-upgrade-595347                                                                                                                                   │ running-upgrade-595347   │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │ 25 Oct 25 10:13 UTC │
	│ start   │ -p force-systemd-env-926084 --memory=3072 --alsologtostderr -v=5 --driver=kvm2                                                                              │ force-systemd-env-926084 │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │                     │
	│ unpause │ -p old-k8s-version-019967 --alsologtostderr -v=1                                                                                                            │ old-k8s-version-019967   │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │ 25 Oct 25 10:14 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:13:38
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:13:38.782930  402781 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:13:38.783226  402781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:13:38.783237  402781 out.go:374] Setting ErrFile to fd 2...
	I1025 10:13:38.783243  402781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:13:38.783473  402781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
	I1025 10:13:38.783978  402781 out.go:368] Setting JSON to false
	I1025 10:13:38.784956  402781 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6961,"bootTime":1761380258,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:13:38.785054  402781 start.go:141] virtualization: kvm guest
	I1025 10:13:38.787111  402781 out.go:179] * [force-systemd-env-926084] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:13:38.788298  402781 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:13:38.788310  402781 notify.go:220] Checking for updates...
	I1025 10:13:38.790246  402781 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:13:38.791370  402781 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-367343/kubeconfig
	I1025 10:13:38.792493  402781 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-367343/.minikube
	I1025 10:13:38.793475  402781 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:13:38.794506  402781 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1025 10:13:38.795949  402781 config.go:182] Loaded profile config "NoKubernetes-586342": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1025 10:13:38.796070  402781 config.go:182] Loaded profile config "gvisor-130661": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1025 10:13:38.796173  402781 config.go:182] Loaded profile config "old-k8s-version-019967": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1025 10:13:38.796299  402781 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:13:38.830938  402781 out.go:179] * Using the kvm2 driver based on user configuration
	I1025 10:13:38.832132  402781 start.go:305] selected driver: kvm2
	I1025 10:13:38.832146  402781 start.go:925] validating driver "kvm2" against <nil>
	I1025 10:13:38.832159  402781 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:13:38.832883  402781 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:13:38.833135  402781 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 10:13:38.833165  402781 cni.go:84] Creating CNI manager for ""
	I1025 10:13:38.833253  402781 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 10:13:38.833264  402781 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 10:13:38.833306  402781 start.go:349] cluster config:
	{Name:force-systemd-env-926084 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-926084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:13:38.833410  402781 iso.go:125] acquiring lock: {Name:mkaf34b0e79311c874a9b61067611bc0cdebbfac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:13:38.834759  402781 out.go:179] * Starting "force-systemd-env-926084" primary control-plane node in "force-systemd-env-926084" cluster
	I1025 10:13:36.040220  402024 main.go:141] libmachine: domain gvisor-130661 has defined MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:36.040858  402024 main.go:141] libmachine: no network interface addresses found for domain gvisor-130661 (source=lease)
	I1025 10:13:36.040869  402024 main.go:141] libmachine: trying to list again with source=arp
	I1025 10:13:36.041228  402024 main.go:141] libmachine: unable to find current IP address of domain gvisor-130661 in network mk-gvisor-130661 (interfaces detected: [])
	I1025 10:13:36.041264  402024 retry.go:31] will retry after 4.512397635s: waiting for domain to come up
	I1025 10:13:40.558965  402024 main.go:141] libmachine: domain gvisor-130661 has defined MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:40.559668  402024 main.go:141] libmachine: domain gvisor-130661 has current primary IP address 192.168.61.156 and MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:40.559677  402024 main.go:141] libmachine: found domain IP: 192.168.61.156
	I1025 10:13:40.559683  402024 main.go:141] libmachine: reserving static IP address...
	I1025 10:13:40.560164  402024 main.go:141] libmachine: unable to find host DHCP lease matching {name: "gvisor-130661", mac: "52:54:00:b8:b3:f3", ip: "192.168.61.156"} in network mk-gvisor-130661
	I1025 10:13:40.754353  402024 main.go:141] libmachine: reserved static IP address 192.168.61.156 for domain gvisor-130661
	I1025 10:13:40.754378  402024 main.go:141] libmachine: waiting for SSH...
	I1025 10:13:40.754384  402024 main.go:141] libmachine: Getting to WaitForSSH function...
	I1025 10:13:40.757452  402024 main.go:141] libmachine: domain gvisor-130661 has defined MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:40.757868  402024 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:b3:f3", ip: ""} in network mk-gvisor-130661: {Iface:virbr3 ExpiryTime:2025-10-25 11:13:36 +0000 UTC Type:0 Mac:52:54:00:b8:b3:f3 Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b8:b3:f3}
	I1025 10:13:40.757887  402024 main.go:141] libmachine: domain gvisor-130661 has defined IP address 192.168.61.156 and MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:40.758057  402024 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:40.758319  402024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.156 22 <nil> <nil>}
	I1025 10:13:40.758324  402024 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1025 10:13:40.864798  402024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:13:40.865182  402024 main.go:141] libmachine: domain creation complete
	I1025 10:13:40.866764  402024 machine.go:93] provisionDockerMachine start ...
	I1025 10:13:40.869183  402024 main.go:141] libmachine: domain gvisor-130661 has defined MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:40.869611  402024 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:b3:f3", ip: ""} in network mk-gvisor-130661: {Iface:virbr3 ExpiryTime:2025-10-25 11:13:36 +0000 UTC Type:0 Mac:52:54:00:b8:b3:f3 Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:gvisor-130661 Clientid:01:52:54:00:b8:b3:f3}
	I1025 10:13:40.869628  402024 main.go:141] libmachine: domain gvisor-130661 has defined IP address 192.168.61.156 and MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:40.869768  402024 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:40.869956  402024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.156 22 <nil> <nil>}
	I1025 10:13:40.869960  402024 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:13:40.974742  402024 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1025 10:13:40.974767  402024 buildroot.go:166] provisioning hostname "gvisor-130661"
	I1025 10:13:40.977888  402024 main.go:141] libmachine: domain gvisor-130661 has defined MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:40.978295  402024 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:b3:f3", ip: ""} in network mk-gvisor-130661: {Iface:virbr3 ExpiryTime:2025-10-25 11:13:36 +0000 UTC Type:0 Mac:52:54:00:b8:b3:f3 Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:gvisor-130661 Clientid:01:52:54:00:b8:b3:f3}
	I1025 10:13:40.978319  402024 main.go:141] libmachine: domain gvisor-130661 has defined IP address 192.168.61.156 and MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:40.978541  402024 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:40.978725  402024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.156 22 <nil> <nil>}
	I1025 10:13:40.978731  402024 main.go:141] libmachine: About to run SSH command:
	sudo hostname gvisor-130661 && echo "gvisor-130661" | sudo tee /etc/hostname
	I1025 10:13:42.301682  402613 start.go:364] duration metric: took 13.710390555s to acquireMachinesLock for "NoKubernetes-586342"
	I1025 10:13:42.301726  402613 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:13:42.301732  402613 fix.go:54] fixHost starting: 
	I1025 10:13:42.303890  402613 fix.go:112] recreateIfNeeded on NoKubernetes-586342: state=Stopped err=<nil>
	W1025 10:13:42.303911  402613 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:13:42.305888  402613 out.go:252] * Restarting existing kvm2 VM for "NoKubernetes-586342" ...
	I1025 10:13:42.305919  402613 main.go:141] libmachine: starting domain...
	I1025 10:13:42.305929  402613 main.go:141] libmachine: ensuring networks are active...
	I1025 10:13:42.306934  402613 main.go:141] libmachine: Ensuring network default is active
	I1025 10:13:42.307587  402613 main.go:141] libmachine: Ensuring network mk-NoKubernetes-586342 is active
	I1025 10:13:42.308526  402613 main.go:141] libmachine: getting domain XML...
	I1025 10:13:42.309819  402613 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>NoKubernetes-586342</name>
	  <uuid>c971bcb9-1045-42ca-86f2-5d0d067254f1</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21767-367343/.minikube/machines/NoKubernetes-586342/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21767-367343/.minikube/machines/NoKubernetes-586342/NoKubernetes-586342.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:3f:44:a0'/>
	      <source network='mk-NoKubernetes-586342'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:2d:db:e3'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
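	The XML above is the exact libvirt domain definition libmachine re-submits when restarting the stopped VM. A hedged equivalent for manual inspection, using the same qemu:///system URI the logs show:
	
	virsh -c qemu:///system dumpxml NoKubernetes-586342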
	
	I1025 10:13:38.835887  402781 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1025 10:13:38.835925  402781 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-367343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1025 10:13:38.835936  402781 cache.go:58] Caching tarball of preloaded images
	I1025 10:13:38.836023  402781 preload.go:233] Found /home/jenkins/minikube-integration/21767-367343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 10:13:38.836036  402781 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1025 10:13:38.836137  402781 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/force-systemd-env-926084/config.json ...
	I1025 10:13:38.836162  402781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/force-systemd-env-926084/config.json: {Name:mk025b208b32b946d84672d923235e1859c48fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:38.836328  402781 start.go:360] acquireMachinesLock for force-systemd-env-926084: {Name:mk098acfda26f2145f87464d3ecf0ec8fc8b43f6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 10:13:41.098340  402024 main.go:141] libmachine: SSH cmd err, output: <nil>: gvisor-130661
	
	I1025 10:13:41.101343  402024 main.go:141] libmachine: domain gvisor-130661 has defined MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:41.101732  402024 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:b3:f3", ip: ""} in network mk-gvisor-130661: {Iface:virbr3 ExpiryTime:2025-10-25 11:13:36 +0000 UTC Type:0 Mac:52:54:00:b8:b3:f3 Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:gvisor-130661 Clientid:01:52:54:00:b8:b3:f3}
	I1025 10:13:41.101746  402024 main.go:141] libmachine: domain gvisor-130661 has defined IP address 192.168.61.156 and MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:41.101908  402024 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:41.102097  402024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.156 22 <nil> <nil>}
	I1025 10:13:41.102106  402024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sgvisor-130661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 gvisor-130661/g' /etc/hosts;
				else 
					echo '127.0.1.1 gvisor-130661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:13:41.217139  402024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:13:41.217165  402024 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21767-367343/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-367343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-367343/.minikube}
	I1025 10:13:41.217241  402024 buildroot.go:174] setting up certificates
	I1025 10:13:41.217256  402024 provision.go:84] configureAuth start
	I1025 10:13:41.220112  402024 main.go:141] libmachine: domain gvisor-130661 has defined MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:41.220602  402024 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:b3:f3", ip: ""} in network mk-gvisor-130661: {Iface:virbr3 ExpiryTime:2025-10-25 11:13:36 +0000 UTC Type:0 Mac:52:54:00:b8:b3:f3 Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:gvisor-130661 Clientid:01:52:54:00:b8:b3:f3}
	I1025 10:13:41.220620  402024 main.go:141] libmachine: domain gvisor-130661 has defined IP address 192.168.61.156 and MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:41.223074  402024 main.go:141] libmachine: domain gvisor-130661 has defined MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:41.223544  402024 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:b3:f3", ip: ""} in network mk-gvisor-130661: {Iface:virbr3 ExpiryTime:2025-10-25 11:13:36 +0000 UTC Type:0 Mac:52:54:00:b8:b3:f3 Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:gvisor-130661 Clientid:01:52:54:00:b8:b3:f3}
	I1025 10:13:41.223567  402024 main.go:141] libmachine: domain gvisor-130661 has defined IP address 192.168.61.156 and MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:41.223760  402024 provision.go:143] copyHostCerts
	I1025 10:13:41.223814  402024 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-367343/.minikube/ca.pem, removing ...
	I1025 10:13:41.223829  402024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-367343/.minikube/ca.pem
	I1025 10:13:41.223900  402024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-367343/.minikube/ca.pem (1078 bytes)
	I1025 10:13:41.224024  402024 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-367343/.minikube/cert.pem, removing ...
	I1025 10:13:41.224027  402024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-367343/.minikube/cert.pem
	I1025 10:13:41.224054  402024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-367343/.minikube/cert.pem (1123 bytes)
	I1025 10:13:41.224105  402024 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-367343/.minikube/key.pem, removing ...
	I1025 10:13:41.224108  402024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-367343/.minikube/key.pem
	I1025 10:13:41.224128  402024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-367343/.minikube/key.pem (1675 bytes)
	I1025 10:13:41.224183  402024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-367343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca-key.pem org=jenkins.gvisor-130661 san=[127.0.0.1 192.168.61.156 gvisor-130661 localhost minikube]
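provision.go signs the Docker machine server certificate against the minikube CA with exactly the SAN list logged above. The actual signing happens in Go; a rough openssl equivalent as a sketch, assuming bash (for the process substitution), the ca.pem/ca-key.pem files from the log, and a placeholder 365-day lifetime:

    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.gvisor-130661" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.61.156,DNS:gvisor-130661,DNS:localhost,DNS:minikube') \
      -out server.pem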
	I1025 10:13:41.887533  402024 provision.go:177] copyRemoteCerts
	I1025 10:13:41.887587  402024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:13:41.890180  402024 main.go:141] libmachine: domain gvisor-130661 has defined MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:41.890713  402024 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:b3:f3", ip: ""} in network mk-gvisor-130661: {Iface:virbr3 ExpiryTime:2025-10-25 11:13:36 +0000 UTC Type:0 Mac:52:54:00:b8:b3:f3 Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:gvisor-130661 Clientid:01:52:54:00:b8:b3:f3}
	I1025 10:13:41.890736  402024 main.go:141] libmachine: domain gvisor-130661 has defined IP address 192.168.61.156 and MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:41.890887  402024 sshutil.go:53] new ssh client: &{IP:192.168.61.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/gvisor-130661/id_rsa Username:docker}
	I1025 10:13:41.973947  402024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:13:42.001976  402024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 10:13:42.029319  402024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:13:42.056814  402024 provision.go:87] duration metric: took 839.542777ms to configureAuth
	I1025 10:13:42.056832  402024 buildroot.go:189] setting minikube options for container-runtime
	I1025 10:13:42.057002  402024 config.go:182] Loaded profile config "gvisor-130661": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1025 10:13:42.057008  402024 machine.go:96] duration metric: took 1.190235781s to provisionDockerMachine
	I1025 10:13:42.057013  402024 client.go:171] duration metric: took 21.381589969s to LocalClient.Create
	I1025 10:13:42.057032  402024 start.go:167] duration metric: took 21.381653284s to libmachine.API.Create "gvisor-130661"
	I1025 10:13:42.057037  402024 start.go:293] postStartSetup for "gvisor-130661" (driver="kvm2")
	I1025 10:13:42.057043  402024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:13:42.057083  402024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:13:42.060063  402024 main.go:141] libmachine: domain gvisor-130661 has defined MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:42.060492  402024 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:b3:f3", ip: ""} in network mk-gvisor-130661: {Iface:virbr3 ExpiryTime:2025-10-25 11:13:36 +0000 UTC Type:0 Mac:52:54:00:b8:b3:f3 Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:gvisor-130661 Clientid:01:52:54:00:b8:b3:f3}
	I1025 10:13:42.060509  402024 main.go:141] libmachine: domain gvisor-130661 has defined IP address 192.168.61.156 and MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:42.060676  402024 sshutil.go:53] new ssh client: &{IP:192.168.61.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/gvisor-130661/id_rsa Username:docker}
	I1025 10:13:42.144485  402024 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:13:42.149276  402024 info.go:137] Remote host: Buildroot 2025.02
	I1025 10:13:42.149300  402024 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-367343/.minikube/addons for local assets ...
	I1025 10:13:42.149382  402024 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-367343/.minikube/files for local assets ...
	I1025 10:13:42.149485  402024 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-367343/.minikube/files/etc/ssl/certs/3713312.pem -> 3713312.pem in /etc/ssl/certs
	I1025 10:13:42.149574  402024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:13:42.161464  402024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/files/etc/ssl/certs/3713312.pem --> /etc/ssl/certs/3713312.pem (1708 bytes)
	I1025 10:13:42.189670  402024 start.go:296] duration metric: took 132.616577ms for postStartSetup
	I1025 10:13:42.192675  402024 main.go:141] libmachine: domain gvisor-130661 has defined MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:42.193005  402024 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:b3:f3", ip: ""} in network mk-gvisor-130661: {Iface:virbr3 ExpiryTime:2025-10-25 11:13:36 +0000 UTC Type:0 Mac:52:54:00:b8:b3:f3 Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:gvisor-130661 Clientid:01:52:54:00:b8:b3:f3}
	I1025 10:13:42.193022  402024 main.go:141] libmachine: domain gvisor-130661 has defined IP address 192.168.61.156 and MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:42.193236  402024 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/config.json ...
	I1025 10:13:42.193416  402024 start.go:128] duration metric: took 21.520313371s to createHost
	I1025 10:13:42.195405  402024 main.go:141] libmachine: domain gvisor-130661 has defined MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:42.195725  402024 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:b3:f3", ip: ""} in network mk-gvisor-130661: {Iface:virbr3 ExpiryTime:2025-10-25 11:13:36 +0000 UTC Type:0 Mac:52:54:00:b8:b3:f3 Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:gvisor-130661 Clientid:01:52:54:00:b8:b3:f3}
	I1025 10:13:42.195740  402024 main.go:141] libmachine: domain gvisor-130661 has defined IP address 192.168.61.156 and MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:42.195873  402024 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:42.196051  402024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.156 22 <nil> <nil>}
	I1025 10:13:42.196055  402024 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 10:13:42.301551  402024 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761387222.250314119
	
	I1025 10:13:42.301563  402024 fix.go:216] guest clock: 1761387222.250314119
	I1025 10:13:42.301569  402024 fix.go:229] Guest: 2025-10-25 10:13:42.250314119 +0000 UTC Remote: 2025-10-25 10:13:42.193421719 +0000 UTC m=+61.210134028 (delta=56.8924ms)
	I1025 10:13:42.301584  402024 fix.go:200] guest clock delta is within tolerance: 56.8924ms
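The delta is simply the guest's `date +%s.%N` reading minus the host-recorded remote timestamp; checking the logged value by hand:

    echo '1761387222.250314119 - 1761387222.193421719' | bc   # .056892400, i.e. the 56.8924ms above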
	I1025 10:13:42.301588  402024 start.go:83] releasing machines lock for "gvisor-130661", held for 21.628620842s
	I1025 10:13:42.304923  402024 main.go:141] libmachine: domain gvisor-130661 has defined MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:42.305359  402024 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:b3:f3", ip: ""} in network mk-gvisor-130661: {Iface:virbr3 ExpiryTime:2025-10-25 11:13:36 +0000 UTC Type:0 Mac:52:54:00:b8:b3:f3 Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:gvisor-130661 Clientid:01:52:54:00:b8:b3:f3}
	I1025 10:13:42.305384  402024 main.go:141] libmachine: domain gvisor-130661 has defined IP address 192.168.61.156 and MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:42.305986  402024 ssh_runner.go:195] Run: cat /version.json
	I1025 10:13:42.306053  402024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:13:42.309622  402024 main.go:141] libmachine: domain gvisor-130661 has defined MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:42.309956  402024 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:b3:f3", ip: ""} in network mk-gvisor-130661: {Iface:virbr3 ExpiryTime:2025-10-25 11:13:36 +0000 UTC Type:0 Mac:52:54:00:b8:b3:f3 Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:gvisor-130661 Clientid:01:52:54:00:b8:b3:f3}
	I1025 10:13:42.309971  402024 main.go:141] libmachine: domain gvisor-130661 has defined IP address 192.168.61.156 and MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:42.310041  402024 main.go:141] libmachine: domain gvisor-130661 has defined MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:42.310108  402024 sshutil.go:53] new ssh client: &{IP:192.168.61.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/gvisor-130661/id_rsa Username:docker}
	I1025 10:13:42.310605  402024 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:b3:f3", ip: ""} in network mk-gvisor-130661: {Iface:virbr3 ExpiryTime:2025-10-25 11:13:36 +0000 UTC Type:0 Mac:52:54:00:b8:b3:f3 Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:gvisor-130661 Clientid:01:52:54:00:b8:b3:f3}
	I1025 10:13:42.310629  402024 main.go:141] libmachine: domain gvisor-130661 has defined IP address 192.168.61.156 and MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:42.310792  402024 sshutil.go:53] new ssh client: &{IP:192.168.61.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/gvisor-130661/id_rsa Username:docker}
	I1025 10:13:42.387689  402024 ssh_runner.go:195] Run: systemctl --version
	I1025 10:13:42.425313  402024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:13:42.433374  402024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:13:42.433434  402024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:13:42.452655  402024 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
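The find pass above parks any pre-existing bridge/podman CNI configs under a .mk_disabled suffix so the runtime cannot attach pods to a stale network before minikube writes its own config. The same rename unrolled into a plain shell loop, as a sketch (the logged find additionally restricts itself to regular files at depth 1):

    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
        case "$f" in *.mk_disabled|*'*'*) continue ;; esac   # skip already-disabled files and unmatched globs
        sudo mv "$f" "$f.mk_disabled"
    done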
	I1025 10:13:42.452669  402024 start.go:495] detecting cgroup driver to use...
	I1025 10:13:42.452733  402024 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1025 10:13:42.487228  402024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 10:13:42.502788  402024 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:13:42.502853  402024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:13:42.520355  402024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:13:42.536335  402024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:13:42.692740  402024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:13:42.905480  402024 docker.go:234] disabling docker service ...
	I1025 10:13:42.905531  402024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:13:42.921624  402024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:13:42.938523  402024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:13:43.106710  402024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:13:43.249487  402024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:13:43.265281  402024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:13:43.288241  402024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1025 10:13:43.301628  402024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 10:13:43.314423  402024 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 10:13:43.314485  402024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 10:13:43.327024  402024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 10:13:43.339931  402024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 10:13:43.352298  402024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 10:13:43.365360  402024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:13:43.379276  402024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 10:13:43.391249  402024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1025 10:13:43.404444  402024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
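After this run of sed edits the settings minikube cares about can be spot-checked in one pass; a sketch of the expected state (exact indentation and line numbers depend on the image's stock config.toml):

    sudo grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    # expected after the edits above:
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = false          (cgroupfs driver, matching the kubelet config below)
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true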
	I1025 10:13:43.417246  402024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:13:43.428309  402024 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 10:13:43.428358  402024 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 10:13:43.450896  402024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
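The sysctl probe failed only because br_netfilter was not loaded yet; once the modprobe succeeds the key exists under /proc and defaults to 1. A quick recheck from inside the guest:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # resolvable now; bridged pod traffic needs it to be 1
    cat /proc/sys/net/ipv4/ip_forward           # 1, per the echo above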
	I1025 10:13:43.465626  402024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:43.632417  402024 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 10:13:43.692011  402024 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1025 10:13:43.692080  402024 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1025 10:13:43.698630  402024 retry.go:31] will retry after 1.326742842s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
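retry.go polls with backoff until the socket reappears after the containerd restart. The same wait written as a plain loop, with a hypothetical 60s cap to mirror the "Will wait 60s" above:

    for i in $(seq 1 60); do
        [ -S /run/containerd/containerd.sock ] && break
        sleep 1
    done
    stat /run/containerd/containerd.sock   # succeeds once containerd has re-created the socket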
	I1025 10:13:45.025580  402024 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1025 10:13:45.033857  402024 start.go:563] Will wait 60s for crictl version
	I1025 10:13:45.033929  402024 ssh_runner.go:195] Run: which crictl
	I1025 10:13:45.038114  402024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 10:13:45.087321  402024 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1025 10:13:45.087400  402024 ssh_runner.go:195] Run: containerd --version
	I1025 10:13:45.127456  402024 ssh_runner.go:195] Run: containerd --version
	I1025 10:13:45.167156  402024 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.23 ...
	I1025 10:13:45.168419  402024 out.go:179]   - opt containerd=/var/run/containerd/containerd.sock
	I1025 10:13:45.172772  402024 main.go:141] libmachine: domain gvisor-130661 has defined MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:45.173249  402024 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:b3:f3", ip: ""} in network mk-gvisor-130661: {Iface:virbr3 ExpiryTime:2025-10-25 11:13:36 +0000 UTC Type:0 Mac:52:54:00:b8:b3:f3 Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:gvisor-130661 Clientid:01:52:54:00:b8:b3:f3}
	I1025 10:13:45.173273  402024 main.go:141] libmachine: domain gvisor-130661 has defined IP address 192.168.61.156 and MAC address 52:54:00:b8:b3:f3 in network mk-gvisor-130661
	I1025 10:13:45.173511  402024 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1025 10:13:45.178526  402024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:13:45.194535  402024 kubeadm.go:883] updating cluster {Name:gvisor-130661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[containerd=/var/run/containerd/containerd.sock] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:gvisor-130661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.156 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:13:45.194672  402024 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1025 10:13:45.194740  402024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:13:45.232800  402024 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1025 10:13:45.232871  402024 ssh_runner.go:195] Run: which lz4
	I1025 10:13:45.237317  402024 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 10:13:45.242404  402024 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 10:13:45.242434  402024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (409015552 bytes)
	I1025 10:13:43.645469  402613 main.go:141] libmachine: waiting for domain to start...
	I1025 10:13:43.647440  402613 main.go:141] libmachine: domain is now running
	I1025 10:13:43.647467  402613 main.go:141] libmachine: waiting for IP...
	I1025 10:13:43.648483  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:43.649179  402613 main.go:141] libmachine: domain NoKubernetes-586342 has current primary IP address 192.168.50.187 and MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:43.649215  402613 main.go:141] libmachine: found domain IP: 192.168.50.187
	I1025 10:13:43.649223  402613 main.go:141] libmachine: reserving static IP address...
	I1025 10:13:43.649665  402613 main.go:141] libmachine: unable to find host DHCP lease matching {name: "NoKubernetes-586342", mac: "52:54:00:3f:44:a0", ip: "192.168.50.187"} in network mk-NoKubernetes-586342
	I1025 10:13:43.904807  402613 main.go:141] libmachine: failed reserving static IP address 192.168.50.187 for domain NoKubernetes-586342, will continue anyway: virError(Code=55, Domain=19, Message='Requested operation is not valid: there is an existing dhcp host entry in network 'mk-NoKubernetes-586342' that matches "<host mac='52:54:00:3f:44:a0' name='NoKubernetes-586342' ip='192.168.50.187'/>"')
	I1025 10:13:43.904818  402613 main.go:141] libmachine: waiting for SSH...
	I1025 10:13:43.904832  402613 main.go:141] libmachine: Getting to WaitForSSH function...
	I1025 10:13:43.907873  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:43.908292  402613 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:44:a0", ip: ""} in network mk-NoKubernetes-586342: {Iface:virbr2 ExpiryTime:2025-10-25 11:13:01 +0000 UTC Type:0 Mac:52:54:00:3f:44:a0 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:nokubernetes-586342 Clientid:01:52:54:00:3f:44:a0}
	I1025 10:13:43.908314  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined IP address 192.168.50.187 and MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:43.908483  402613 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:43.908714  402613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I1025 10:13:43.908721  402613 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1025 10:13:46.977464  402613 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.187:22: connect: no route to host
	I1025 10:13:46.763687  402024 containerd.go:563] duration metric: took 1.526405138s to copy over tarball
	I1025 10:13:46.763754  402024 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 10:13:48.351680  402024 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.587885129s)
	I1025 10:13:48.351708  402024 containerd.go:570] duration metric: took 1.587998291s to extract the tarball
	I1025 10:13:48.351718  402024 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 10:13:48.400492  402024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:48.548711  402024 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 10:13:48.590094  402024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:13:48.624715  402024 retry.go:31] will retry after 359.683603ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:13:48Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1025 10:13:48.985328  402024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:13:49.031849  402024 containerd.go:627] all images are preloaded for containerd runtime.
	I1025 10:13:49.031862  402024 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:13:49.031869  402024 kubeadm.go:934] updating node { 192.168.61.156 8443 v1.34.1 containerd true true} ...
	I1025 10:13:49.031953  402024 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=gvisor-130661 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:gvisor-130661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:13:49.032004  402024 ssh_runner.go:195] Run: sudo crictl info
	I1025 10:13:49.067694  402024 cni.go:84] Creating CNI manager for ""
	I1025 10:13:49.067718  402024 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1025 10:13:49.067746  402024 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:13:49.067777  402024 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.156 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:gvisor-130661 NodeName:gvisor-130661 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:13:49.067931  402024 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "gvisor-130661"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.156"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.156"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
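The rendered file above stacks four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. It can be sanity-checked against the same kubeadm binary before init; `kubeadm config validate` has existed since kubeadm v1.26, so it should apply to the v1.34.1 binary here, but treat this as a sketch:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml   # exits 0 when every document parses and validates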
	I1025 10:13:49.067993  402024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:13:49.079760  402024 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:13:49.079828  402024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:13:49.091563  402024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1025 10:13:49.115531  402024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:13:49.140257  402024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1025 10:13:49.165390  402024 ssh_runner.go:195] Run: grep 192.168.61.156	control-plane.minikube.internal$ /etc/hosts
	I1025 10:13:49.170710  402024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.156	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
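The brace-group-plus-temp-file shape here is the standard workaround for sudo redirection: the caller's shell opens the ">" target before sudo ever runs, so writing /etc/hosts directly would fail with permission denied for an unprivileged user. Compare (illustrative entry only):

    echo '192.168.61.1 host.minikube.internal' | sudo tee -a /etc/hosts   # works: tee does the writing as root
    sudo echo '192.168.61.1 host.minikube.internal' >> /etc/hosts         # fails: ">>" runs as the unprivileged caller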
	I1025 10:13:49.189381  402024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:49.332805  402024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:13:49.353686  402024 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661 for IP: 192.168.61.156
	I1025 10:13:49.353709  402024 certs.go:195] generating shared ca certs ...
	I1025 10:13:49.353730  402024 certs.go:227] acquiring lock for ca certs: {Name:mk95947bc4fdffa4fda6bcfa90d00796a47f868e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:49.353941  402024 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-367343/.minikube/ca.key
	I1025 10:13:49.354005  402024 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-367343/.minikube/proxy-client-ca.key
	I1025 10:13:49.354015  402024 certs.go:257] generating profile certs ...
	I1025 10:13:49.354095  402024 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.key
	I1025 10:13:49.354110  402024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt with IP's: []
	I1025 10:13:49.444003  402024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt ...
	I1025 10:13:49.444024  402024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt: {Name:mk0195dc14b8eee2b627a9773f7692b91c79fa7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:49.444233  402024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.key ...
	I1025 10:13:49.444247  402024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.key: {Name:mkec249db95737128842a3ff3c4d8da50f79e4a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:49.444388  402024 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/apiserver.key.3473de02
	I1025 10:13:49.444401  402024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/apiserver.crt.3473de02 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.156]
	I1025 10:13:49.625163  402024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/apiserver.crt.3473de02 ...
	I1025 10:13:49.625183  402024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/apiserver.crt.3473de02: {Name:mk2889687e701cb7f30b3c086dc17011822e856a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:49.625393  402024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/apiserver.key.3473de02 ...
	I1025 10:13:49.625415  402024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/apiserver.key.3473de02: {Name:mka2f7e82eec7d632955ff67f415cdc73121f1b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:49.625520  402024 certs.go:382] copying /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/apiserver.crt.3473de02 -> /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/apiserver.crt
	I1025 10:13:49.625590  402024 certs.go:386] copying /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/apiserver.key.3473de02 -> /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/apiserver.key
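The apiserver cert is signed for 10.96.0.1, the first address of the 10.96.0.0/12 service CIDR and therefore the in-cluster ClusterIP of the kubernetes service, alongside loopback and the node IP. The SAN list can be confirmed on the generated file (OpenSSL 1.1.1+ for the -ext flag):

    openssl x509 -noout -ext subjectAltName -in /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/apiserver.crt
    # expect IP Address:10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.61.156 among the entries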
	I1025 10:13:49.625642  402024 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/proxy-client.key
	I1025 10:13:49.625652  402024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/proxy-client.crt with IP's: []
	I1025 10:13:49.859897  402024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/proxy-client.crt ...
	I1025 10:13:49.859915  402024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/proxy-client.crt: {Name:mk71ff5a0a971245f57bbf54346bad2c46f37e89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:49.860130  402024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/proxy-client.key ...
	I1025 10:13:49.860145  402024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/proxy-client.key: {Name:mk6ed67fb5ca4f59456afa07b87e47572570000b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:49.860407  402024 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/371331.pem (1338 bytes)
	W1025 10:13:49.860440  402024 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-367343/.minikube/certs/371331_empty.pem, impossibly tiny 0 bytes
	I1025 10:13:49.860448  402024 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:13:49.860470  402024 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:13:49.860488  402024 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:13:49.860504  402024 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/key.pem (1675 bytes)
	I1025 10:13:49.860537  402024 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-367343/.minikube/files/etc/ssl/certs/3713312.pem (1708 bytes)
	I1025 10:13:49.861166  402024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:13:49.892441  402024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:13:49.922511  402024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:13:49.955282  402024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:13:49.986391  402024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 10:13:50.015697  402024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:13:50.047803  402024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:13:50.080498  402024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:13:50.114961  402024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:13:50.143853  402024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/certs/371331.pem --> /usr/share/ca-certificates/371331.pem (1338 bytes)
	I1025 10:13:50.173122  402024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/files/etc/ssl/certs/3713312.pem --> /usr/share/ca-certificates/3713312.pem (1708 bytes)
	I1025 10:13:50.202660  402024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:13:50.222606  402024 ssh_runner.go:195] Run: openssl version
	I1025 10:13:50.228857  402024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/371331.pem && ln -fs /usr/share/ca-certificates/371331.pem /etc/ssl/certs/371331.pem"
	I1025 10:13:50.244402  402024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/371331.pem
	I1025 10:13:50.249571  402024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:23 /usr/share/ca-certificates/371331.pem
	I1025 10:13:50.249629  402024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/371331.pem
	I1025 10:13:50.257246  402024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/371331.pem /etc/ssl/certs/51391683.0"
	I1025 10:13:50.270158  402024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3713312.pem && ln -fs /usr/share/ca-certificates/3713312.pem /etc/ssl/certs/3713312.pem"
	I1025 10:13:50.283087  402024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3713312.pem
	I1025 10:13:50.288172  402024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:23 /usr/share/ca-certificates/3713312.pem
	I1025 10:13:50.288246  402024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3713312.pem
	I1025 10:13:50.295366  402024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3713312.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:13:50.308440  402024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:13:50.321329  402024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:50.326368  402024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:50.326415  402024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:50.333591  402024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
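Each `openssl x509 -hash` run computes the subject-name hash that OpenSSL uses for directory lookups, and the matching `ln -fs` creates the <hash>.0 entry (b5213941.0 for minikubeCA here) so TLS clients in the guest can find the CA under /etc/ssl/certs. Both steps in one sketch:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"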
	I1025 10:13:50.345934  402024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:13:50.350377  402024 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:13:50.350439  402024 kubeadm.go:400] StartCluster: {Name:gvisor-130661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[containerd=/var/run/containerd/containerd.sock] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:gvisor-130661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.156 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:13:50.350533  402024 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1025 10:13:50.350614  402024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:13:50.388046  402024 cri.go:89] found id: ""
	I1025 10:13:50.388112  402024 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:13:50.400556  402024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:13:50.412933  402024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:13:50.424626  402024 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:13:50.424637  402024 kubeadm.go:157] found existing configuration files:
	
	I1025 10:13:50.424688  402024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:13:50.435316  402024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:13:50.435362  402024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:13:50.446581  402024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:13:50.457410  402024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:13:50.457475  402024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:13:50.469835  402024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:13:50.481472  402024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:13:50.481530  402024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:13:50.493521  402024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:13:50.504476  402024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:13:50.504526  402024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 10:13:50.515905  402024 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 10:13:50.568667  402024 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:13:50.568713  402024 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:13:50.660822  402024 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:13:50.660981  402024 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:13:50.661123  402024 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:13:50.669268  402024 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 10:13:50.801923  402024 out.go:252]   - Generating certificates and keys ...
	I1025 10:13:50.802066  402024 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:13:50.802161  402024 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:13:50.810448  402024 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:13:53.057557  402613 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.187:22: connect: no route to host
	I1025 10:13:51.392436  402024 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:13:51.598582  402024 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:13:51.678797  402024 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:13:52.328125  402024 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:13:52.328353  402024 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [gvisor-130661 localhost] and IPs [192.168.61.156 127.0.0.1 ::1]
	I1025 10:13:52.870019  402024 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:13:52.870150  402024 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [gvisor-130661 localhost] and IPs [192.168.61.156 127.0.0.1 ::1]
	I1025 10:13:53.113097  402024 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:13:53.258514  402024 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:13:53.698771  402024 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:13:53.698918  402024 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:13:53.967360  402024 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:13:54.066634  402024 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:13:54.532497  402024 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:13:55.186713  402024 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:13:55.409411  402024 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:13:55.410082  402024 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:13:55.412225  402024 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:13:55.413847  402024 out.go:252]   - Booting up control plane ...
	I1025 10:13:55.413972  402024 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:13:55.414096  402024 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:13:55.414202  402024 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:13:55.444548  402024 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:13:55.444688  402024 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:13:55.452256  402024 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:13:55.452818  402024 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:13:55.452884  402024 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:13:55.620046  402024 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:13:55.620142  402024 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
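kubelet-check polls the kubelet's local healthz endpoint until it answers. The same probe can be run by hand inside the guest (e.g. via minikube ssh) while the control plane boots:

    curl -fsS http://127.0.0.1:10248/healthz && echo   # prints "ok" once the kubelet is serving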
	I1025 10:13:56.170804  402613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:13:56.174412  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:56.174805  402613 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:44:a0", ip: ""} in network mk-NoKubernetes-586342: {Iface:virbr2 ExpiryTime:2025-10-25 11:13:54 +0000 UTC Type:0 Mac:52:54:00:3f:44:a0 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:nokubernetes-586342 Clientid:01:52:54:00:3f:44:a0}
	I1025 10:13:56.174822  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined IP address 192.168.50.187 and MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:56.175031  402613 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/NoKubernetes-586342/config.json ...
	I1025 10:13:56.175237  402613 machine.go:93] provisionDockerMachine start ...
	I1025 10:13:56.177552  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:56.177897  402613 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:44:a0", ip: ""} in network mk-NoKubernetes-586342: {Iface:virbr2 ExpiryTime:2025-10-25 11:13:54 +0000 UTC Type:0 Mac:52:54:00:3f:44:a0 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:nokubernetes-586342 Clientid:01:52:54:00:3f:44:a0}
	I1025 10:13:56.177923  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined IP address 192.168.50.187 and MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:56.178078  402613 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:56.178294  402613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I1025 10:13:56.178299  402613 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:13:56.286586  402613 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1025 10:13:56.286613  402613 buildroot.go:166] provisioning hostname "NoKubernetes-586342"
	I1025 10:13:56.289668  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:56.290115  402613 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:44:a0", ip: ""} in network mk-NoKubernetes-586342: {Iface:virbr2 ExpiryTime:2025-10-25 11:13:54 +0000 UTC Type:0 Mac:52:54:00:3f:44:a0 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:nokubernetes-586342 Clientid:01:52:54:00:3f:44:a0}
	I1025 10:13:56.290134  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined IP address 192.168.50.187 and MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:56.290342  402613 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:56.290584  402613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I1025 10:13:56.290598  402613 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-586342 && echo "NoKubernetes-586342" | sudo tee /etc/hostname
	I1025 10:13:56.414135  402613 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-586342
	
	I1025 10:13:56.417481  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:56.417914  402613 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:44:a0", ip: ""} in network mk-NoKubernetes-586342: {Iface:virbr2 ExpiryTime:2025-10-25 11:13:54 +0000 UTC Type:0 Mac:52:54:00:3f:44:a0 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:nokubernetes-586342 Clientid:01:52:54:00:3f:44:a0}
	I1025 10:13:56.417937  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined IP address 192.168.50.187 and MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:56.418145  402613 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:56.418432  402613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I1025 10:13:56.418450  402613 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-586342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-586342/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-586342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:13:56.533804  402613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
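	(Hostname provisioning above is three idempotent steps: set the live hostname, persist it to /etc/hostname, then patch the 127.0.1.1 entry in /etc/hosts. A sketch that renders the same /etc/hosts edit for an arbitrary hostname, assuming the name is shell-safe; etcHostsPatch is an illustrative helper, not minikube's.)

	package main

	import "fmt"

	// etcHostsPatch renders the same idempotent /etc/hosts edit the
	// provisioner ran above: add or rewrite the 127.0.1.1 alias only
	// when the hostname is not already present.
	func etcHostsPatch(hostname string) string {
		return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
	  fi
	fi`, hostname)
	}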
	I1025 10:13:56.533829  402613 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21767-367343/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-367343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-367343/.minikube}
	I1025 10:13:56.533865  402613 buildroot.go:174] setting up certificates
	I1025 10:13:56.533880  402613 provision.go:84] configureAuth start
	I1025 10:13:56.537630  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:56.538159  402613 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:44:a0", ip: ""} in network mk-NoKubernetes-586342: {Iface:virbr2 ExpiryTime:2025-10-25 11:13:54 +0000 UTC Type:0 Mac:52:54:00:3f:44:a0 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:nokubernetes-586342 Clientid:01:52:54:00:3f:44:a0}
	I1025 10:13:56.538205  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined IP address 192.168.50.187 and MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:56.541277  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:56.541693  402613 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:44:a0", ip: ""} in network mk-NoKubernetes-586342: {Iface:virbr2 ExpiryTime:2025-10-25 11:13:54 +0000 UTC Type:0 Mac:52:54:00:3f:44:a0 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:nokubernetes-586342 Clientid:01:52:54:00:3f:44:a0}
	I1025 10:13:56.541720  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined IP address 192.168.50.187 and MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:56.541882  402613 provision.go:143] copyHostCerts
	I1025 10:13:56.541946  402613 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-367343/.minikube/ca.pem, removing ...
	I1025 10:13:56.541966  402613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-367343/.minikube/ca.pem
	I1025 10:13:56.542050  402613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-367343/.minikube/ca.pem (1078 bytes)
	I1025 10:13:56.542215  402613 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-367343/.minikube/cert.pem, removing ...
	I1025 10:13:56.542223  402613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-367343/.minikube/cert.pem
	I1025 10:13:56.542259  402613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-367343/.minikube/cert.pem (1123 bytes)
	I1025 10:13:56.542435  402613 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-367343/.minikube/key.pem, removing ...
	I1025 10:13:56.542445  402613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-367343/.minikube/key.pem
	I1025 10:13:56.542487  402613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-367343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-367343/.minikube/key.pem (1675 bytes)
	I1025 10:13:56.542569  402613 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-367343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-586342 san=[127.0.0.1 192.168.50.187 NoKubernetes-586342 localhost minikube]
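	(configureAuth above generates a server certificate whose SAN list is exactly the names and IPs just logged. A rough crypto/x509 sketch of that step, assuming the CA pair from certs/ca.pem and certs/ca-key.pem is already parsed; makeServerCert and the validity window are illustrative, not minikube's exact values.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// makeServerCert issues a CA-signed server cert carrying the SAN
	// list from the provision.go line above.
	func makeServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.NoKubernetes-586342"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative lifetime
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs: the literal san=[...] list logged above.
			DNSNames:    []string{"NoKubernetes-586342", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.187")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}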
	I1025 10:13:56.754451  402613 provision.go:177] copyRemoteCerts
	I1025 10:13:56.754502  402613 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:13:56.757138  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:56.757534  402613 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:44:a0", ip: ""} in network mk-NoKubernetes-586342: {Iface:virbr2 ExpiryTime:2025-10-25 11:13:54 +0000 UTC Type:0 Mac:52:54:00:3f:44:a0 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:nokubernetes-586342 Clientid:01:52:54:00:3f:44:a0}
	I1025 10:13:56.757551  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined IP address 192.168.50.187 and MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:56.757683  402613 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/NoKubernetes-586342/id_rsa Username:docker}
	I1025 10:13:56.846007  402613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:13:56.879475  402613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 10:13:56.912864  402613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-367343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:13:56.942872  402613 provision.go:87] duration metric: took 408.977519ms to configureAuth
	I1025 10:13:56.942892  402613 buildroot.go:189] setting minikube options for container-runtime
	I1025 10:13:56.943054  402613 config.go:182] Loaded profile config "NoKubernetes-586342": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1025 10:13:56.946038  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:56.946473  402613 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:44:a0", ip: ""} in network mk-NoKubernetes-586342: {Iface:virbr2 ExpiryTime:2025-10-25 11:13:54 +0000 UTC Type:0 Mac:52:54:00:3f:44:a0 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:nokubernetes-586342 Clientid:01:52:54:00:3f:44:a0}
	I1025 10:13:56.946490  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined IP address 192.168.50.187 and MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:56.946731  402613 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:56.946929  402613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I1025 10:13:56.946935  402613 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 10:13:57.059601  402613 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1025 10:13:57.059629  402613 buildroot.go:70] root file system type: tmpfs
	I1025 10:13:57.059790  402613 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 10:13:57.063490  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:57.064005  402613 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:44:a0", ip: ""} in network mk-NoKubernetes-586342: {Iface:virbr2 ExpiryTime:2025-10-25 11:13:54 +0000 UTC Type:0 Mac:52:54:00:3f:44:a0 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:nokubernetes-586342 Clientid:01:52:54:00:3f:44:a0}
	I1025 10:13:57.064025  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined IP address 192.168.50.187 and MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:57.064235  402613 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:57.064442  402613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I1025 10:13:57.064492  402613 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 10:13:57.211239  402613 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 10:13:57.214217  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:57.214625  402613 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:44:a0", ip: ""} in network mk-NoKubernetes-586342: {Iface:virbr2 ExpiryTime:2025-10-25 11:13:54 +0000 UTC Type:0 Mac:52:54:00:3f:44:a0 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:nokubernetes-586342 Clientid:01:52:54:00:3f:44:a0}
	I1025 10:13:57.214642  402613 main.go:141] libmachine: domain NoKubernetes-586342 has defined IP address 192.168.50.187 and MAC address 52:54:00:3f:44:a0 in network mk-NoKubernetes-586342
	I1025 10:13:57.214824  402613 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:57.215082  402613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I1025 10:13:57.215094  402613 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
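	(The command above is a compare-then-swap: systemd is only reloaded and docker only restarted when the freshly rendered unit actually differs from the installed one, so an unchanged VM is left alone. A one-line builder for the same pattern, with an illustrative name.)

	package main

	import "fmt"

	// updateUnitCmd reproduces the compare-then-swap above: replace the
	// unit and bounce docker only when the rendered file differs.
	func updateUnitCmd(unit string) string {
		return fmt.Sprintf("sudo diff -u %[1]s %[1]s.new || "+
			"{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
			"sudo systemctl -f enable docker && sudo systemctl -f restart docker; }", unit)
	}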
	I1025 10:13:58.458290  402781 start.go:364] duration metric: took 19.621898727s to acquireMachinesLock for "force-systemd-env-926084"
	I1025 10:13:58.458402  402781 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-926084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-926084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 10:13:58.458520  402781 start.go:125] createHost starting for "" (driver="kvm2")
	I1025 10:13:58.460172  402781 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 10:13:58.460443  402781 start.go:159] libmachine.API.Create for "force-systemd-env-926084" (driver="kvm2")
	I1025 10:13:58.460485  402781 client.go:168] LocalClient.Create starting
	I1025 10:13:58.460612  402781 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-367343/.minikube/certs/ca.pem
	I1025 10:13:58.460660  402781 main.go:141] libmachine: Decoding PEM data...
	I1025 10:13:58.460683  402781 main.go:141] libmachine: Parsing certificate...
	I1025 10:13:58.460762  402781 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-367343/.minikube/certs/cert.pem
	I1025 10:13:58.460794  402781 main.go:141] libmachine: Decoding PEM data...
	I1025 10:13:58.460818  402781 main.go:141] libmachine: Parsing certificate...
	I1025 10:13:58.461218  402781 main.go:141] libmachine: creating domain...
	I1025 10:13:58.461233  402781 main.go:141] libmachine: creating network...
	I1025 10:13:58.462873  402781 main.go:141] libmachine: found existing default network
	I1025 10:13:58.463333  402781 main.go:141] libmachine: <network connections='3'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1025 10:13:58.464299  402781 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e2:83:40} reservation:<nil>}
	I1025 10:13:58.464991  402781 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fc:8b:a7} reservation:<nil>}
	I1025 10:13:58.465830  402781 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:18:03:75} reservation:<nil>}
	I1025 10:13:58.466796  402781 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cd1c20}
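	(The network.go lines above walk candidate private /24 subnets in order, skipping each one a host interface already serves, and stop at the first free CIDR. A minimal sketch of that probe, where isTaken stands in for minikube's interface/lease inspection.)

	package main

	import "fmt"

	// pickFreeSubnet mirrors the probe logged above: walk candidate
	// private /24s in order and return the first one not already in use.
	func pickFreeSubnet(candidates []string, isTaken func(cidr string) bool) (string, error) {
		for _, cidr := range candidates {
			if isTaken(cidr) {
				continue // e.g. 192.168.39.0/24, 192.168.50.0/24, 192.168.61.0/24 above
			}
			return cidr, nil
		}
		return "", fmt.Errorf("no free private subnet among %d candidates", len(candidates))
	}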
	I1025 10:13:58.466916  402781 main.go:141] libmachine: defining private network:
	
	<network>
	  <name>mk-force-systemd-env-926084</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1025 10:13:58.474567  402781 main.go:141] libmachine: creating private network mk-force-systemd-env-926084 192.168.72.0/24...
	I1025 10:13:58.554876  402781 main.go:141] libmachine: private network mk-force-systemd-env-926084 192.168.72.0/24 created
	I1025 10:13:58.555237  402781 main.go:141] libmachine: <network>
	  <name>mk-force-systemd-env-926084</name>
	  <uuid>61ebff94-660d-4f18-a963-017652233973</uuid>
	  <bridge name='virbr4' stp='on' delay='0'/>
	  <mac address='52:54:00:78:d3:d0'/>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
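	(Turning the XML above into the live virbr4 bridge is a define-then-start call against libvirt. A sketch assuming the libvirt.org/go/libvirt bindings; the kvm2 driver does the equivalent, but this exact helper is illustrative.)

	package main

	import (
		"log"

		"libvirt.org/go/libvirt"
	)

	// defineNetwork persists a network definition, starts it (the bridge
	// appears), and marks it to come up on host boot.
	func defineNetwork(xml string) {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		net, err := conn.NetworkDefineXML(xml) // persist the definition
		if err != nil {
			log.Fatal(err)
		}
		defer net.Free()

		if err := net.Create(); err != nil { // start it
			log.Fatal(err)
		}
		if err := net.SetAutostart(true); err != nil {
			log.Fatal(err)
		}
	}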
	
	I1025 10:13:58.555271  402781 main.go:141] libmachine: setting up store path in /home/jenkins/minikube-integration/21767-367343/.minikube/machines/force-systemd-env-926084 ...
	I1025 10:13:58.555316  402781 main.go:141] libmachine: building disk image from file:///home/jenkins/minikube-integration/21767-367343/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1025 10:13:58.555329  402781 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21767-367343/.minikube
	I1025 10:13:58.555427  402781 main.go:141] libmachine: Downloading /home/jenkins/minikube-integration/21767-367343/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21767-367343/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1025 10:13:56.622027  402024 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001924976s
	I1025 10:13:56.624740  402024 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:13:56.624842  402024 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.61.156:8443/livez
	I1025 10:13:56.624918  402024 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:13:56.624984  402024 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:13:58.655047  402024 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.028680916s
	I1025 10:14:00.847657  402024 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.222340268s
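	(The three control-plane probes above run in parallel, and each reports its own time-to-healthy. A compact sketch of that fan-out pattern; names are illustrative and TLS verification is skipped for brevity, whereas the real check trusts the cluster CA.)

	package main

	import (
		"crypto/tls"
		"log"
		"net/http"
		"sync"
		"time"
	)

	// checkComponents probes each component's livez/healthz endpoint in
	// parallel and logs how long it took to come up, as in the log above.
	func checkComponents(urls map[string]string, deadline time.Duration) {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		var wg sync.WaitGroup
		stop := time.Now().Add(deadline)
		for name, url := range urls {
			wg.Add(1)
			go func(name, url string) {
				defer wg.Done()
				start := time.Now()
				for time.Now().Before(stop) {
					resp, err := client.Get(url)
					if err == nil {
						resp.Body.Close()
						if resp.StatusCode == http.StatusOK {
							log.Printf("%s is healthy after %s", name, time.Since(start))
							return
						}
					}
					time.Sleep(500 * time.Millisecond)
				}
				log.Printf("%s still unhealthy after %s", name, deadline)
			}(name, url)
		}
		wg.Wait()
	}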
	
	
	==> Docker <==
	Oct 25 10:12:59 old-k8s-version-019967 cri-dockerd[1538]: time="2025-10-25T10:12:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c0bc1aef1566377cad1ee615949d2c3c5e0c1576fd27cc433989791d9c497c25/resolv.conf as [nameserver 192.168.122.1]"
	Oct 25 10:12:59 old-k8s-version-019967 cri-dockerd[1538]: time="2025-10-25T10:12:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9ae6c8f24b1a2560589c12aefebb4984b1178a641b62fa1a6346d136af6dadcd/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 25 10:13:08 old-k8s-version-019967 cri-dockerd[1538]: time="2025-10-25T10:13:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c6c1e2036d7b7a52243fe3d45036e84b8d9a3dbd775ea99b4fb0737772a0ea1/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 25 10:13:08 old-k8s-version-019967 cri-dockerd[1538]: time="2025-10-25T10:13:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0b848e278900c635da64c81c6c219cdf9b416361a64f9c0fa086daa0bb6f4cf5/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 25 10:13:08 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:13:08.446884855Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 10:13:14 old-k8s-version-019967 cri-dockerd[1538]: time="2025-10-25T10:13:14Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 10:13:15 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:13:15.004039653Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Oct 25 10:13:15 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:13:15.125274988Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Oct 25 10:13:15 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:13:15.125679838Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Oct 25 10:13:15 old-k8s-version-019967 cri-dockerd[1538]: time="2025-10-25T10:13:15Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Oct 25 10:13:15 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:13:15.148215197Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 25 10:13:15 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:13:15.148255483Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 25 10:13:15 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:13:15.157686085Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Oct 25 10:13:15 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:13:15.157779017Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 25 10:13:26 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:13:26.188796243Z" level=info msg="ignoring event" container=c0d17429ba6bcab342d50a7549f6c959545788d98b43b999ff2ce28bf7d383ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 10:14:00 old-k8s-version-019967 cri-dockerd[1538]: time="2025-10-25T10:14:00Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Oct 25 10:14:01 old-k8s-version-019967 cri-dockerd[1538]: time="2025-10-25T10:14:01Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-sbkvf_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"0bd26c1cde42eebe969ee9605ca403bfb43a1ad267e7876dc24fc444ec79b044\""
	Oct 25 10:14:01 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:14:01.767265334Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 25 10:14:01 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:14:01.767314610Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 25 10:14:01 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:14:01.778744484Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Oct 25 10:14:01 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:14:01.779281431Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 25 10:14:01 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:14:01.907561063Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Oct 25 10:14:02 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:14:02.045560800Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Oct 25 10:14:02 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:14:02.045709144Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Oct 25 10:14:02 old-k8s-version-019967 cri-dockerd[1538]: time="2025-10-25T10:14:02Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e09dc34b53e85       6e38f40d628db                                                                                         1 second ago         Running             storage-provisioner       2                   99e1dbd93d8e8       storage-provisioner
	ec8d99f78aa4d       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        48 seconds ago       Running             kubernetes-dashboard      0                   5c6c1e2036d7b       kubernetes-dashboard-8694d4445c-h4pfc
	6052b75f2caa2       56cc512116c8f                                                                                         About a minute ago   Running             busybox                   1                   9ae6c8f24b1a2       busybox
	521d071359ab5       ead0a4a53df89                                                                                         About a minute ago   Running             coredns                   1                   c0bc1aef15663       coredns-5dd5756b68-xqchd
	4a5b72a8270c4       ea1030da44aa1                                                                                         About a minute ago   Running             kube-proxy                1                   cb934799800df       kube-proxy-z9lpj
	c0d17429ba6bc       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   99e1dbd93d8e8       storage-provisioner
	2a1ffa2af22ae       f6f496300a2ae                                                                                         About a minute ago   Running             kube-scheduler            1                   c9d492f0d9863       kube-scheduler-old-k8s-version-019967
	0a566ad2cf486       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      1                   fd896622ed567       etcd-old-k8s-version-019967
	8f78166a52383       bb5e0dde9054c                                                                                         About a minute ago   Running             kube-apiserver            1                   612a9c17f7da9       kube-apiserver-old-k8s-version-019967
	64be38358ca41       4be79c38a4bab                                                                                         About a minute ago   Running             kube-controller-manager   1                   fed99e2c7e9e6       kube-controller-manager-old-k8s-version-019967
	e33aeaed80168       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 minutes ago        Exited              busybox                   0                   0dcc90abc2a5d       busybox
	ef4d7f184a0ae       ead0a4a53df89                                                                                         2 minutes ago        Exited              coredns                   0                   de2d2358784d5       coredns-5dd5756b68-xqchd
	bbbdcbb03f352       ea1030da44aa1                                                                                         2 minutes ago        Exited              kube-proxy                0                   d42487b488045       kube-proxy-z9lpj
	8080679efccf1       73deb9a3f7025                                                                                         2 minutes ago        Exited              etcd                      0                   c5d0d4f648e19       etcd-old-k8s-version-019967
	0231e6e289ad6       bb5e0dde9054c                                                                                         2 minutes ago        Exited              kube-apiserver            0                   9a3e71ca1c5fc       kube-apiserver-old-k8s-version-019967
	07c9f990248e6       f6f496300a2ae                                                                                         2 minutes ago        Exited              kube-scheduler            0                   b9efadf0fdea9       kube-scheduler-old-k8s-version-019967
	17d10789bbf97       4be79c38a4bab                                                                                         2 minutes ago        Exited              kube-controller-manager   0                   4ee66852bf882       kube-controller-manager-old-k8s-version-019967
	
	
	==> coredns [521d071359ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34349 - 6753 "HINFO IN 5970406071330794890.8778929822718352350. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028025267s
	
	
	==> coredns [ef4d7f184a0a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-019967
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-019967
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=old-k8s-version-019967
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_11_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:11:28 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-019967
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:14:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:14:00 +0000   Sat, 25 Oct 2025 10:11:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:14:00 +0000   Sat, 25 Oct 2025 10:11:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:14:00 +0000   Sat, 25 Oct 2025 10:11:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 10:14:00 +0000   Sat, 25 Oct 2025 10:14:00 +0000   KubeletNotReady              container runtime status check may not have completed yet
	Addresses:
	  InternalIP:  192.168.39.226
	  Hostname:    old-k8s-version-019967
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 aae1fbb076584924b1620441fbc223c5
	  System UUID:                aae1fbb0-7658-4924-b162-0441fbc223c5
	  Boot ID:                    ce58d0ca-d932-49d6-abfe-58c464eee8a9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 coredns-5dd5756b68-xqchd                          100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m19s
	  kube-system                 etcd-old-k8s-version-019967                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m31s
	  kube-system                 kube-apiserver-old-k8s-version-019967             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-controller-manager-old-k8s-version-019967    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-z9lpj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-old-k8s-version-019967             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 metrics-server-57f55c9bc5-d9tm8                   100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         112s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-gj2s8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-h4pfc             0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 66s                kube-proxy       
	  Normal  Starting                 2m16s              kube-proxy       
	  Normal  Starting                 2m31s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m31s              kubelet          Node old-k8s-version-019967 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m31s              kubelet          Node old-k8s-version-019967 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m31s              kubelet          Node old-k8s-version-019967 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m31s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m28s              kubelet          Node old-k8s-version-019967 status is now: NodeReady
	  Normal  RegisteredNode           2m20s              node-controller  Node old-k8s-version-019967 event: Registered Node old-k8s-version-019967 in Controller
	  Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 73s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  72s (x8 over 73s)  kubelet          Node old-k8s-version-019967 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     72s (x7 over 73s)  kubelet          Node old-k8s-version-019967 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    72s (x8 over 73s)  kubelet          Node old-k8s-version-019967 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           56s                node-controller  Node old-k8s-version-019967 event: Registered Node old-k8s-version-019967 in Controller
	  Normal  Starting                 2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2s                 kubelet          Node old-k8s-version-019967 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2s                 kubelet          Node old-k8s-version-019967 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2s                 kubelet          Node old-k8s-version-019967 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2s                 kubelet          Node old-k8s-version-019967 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2s                 kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[Oct25 10:12] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001619] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000252] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.010453] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000003] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.103884] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.108995] kauditd_printk_skb: 449 callbacks suppressed
	[  +6.083028] kauditd_printk_skb: 174 callbacks suppressed
	[Oct25 10:13] kauditd_printk_skb: 312 callbacks suppressed
	[  +7.073315] kauditd_printk_skb: 75 callbacks suppressed
	[ +11.015199] kauditd_printk_skb: 17 callbacks suppressed
	[Oct25 10:14] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [0a566ad2cf48] <==
	{"level":"warn","ts":"2025-10-25T10:12:55.150626Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.068569ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9883037347427091583 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-019967\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-019967\" value_size:5313 >> failure:<>>","response":"size:5"}
	{"level":"info","ts":"2025-10-25T10:12:55.15105Z","caller":"traceutil/trace.go:171","msg":"trace[1380957270] transaction","detail":"{read_only:false; number_of_response:0; response_revision:462; }","duration":"377.206137ms","start":"2025-10-25T10:12:54.773828Z","end":"2025-10-25T10:12:55.151034Z","steps":["trace[1380957270] 'process raft request'  (duration: 192.070271ms)","trace[1380957270] 'compare'  (duration: 184.006376ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T10:12:55.151365Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:12:54.773813Z","time spent":"377.441311ms","remote":"127.0.0.1:42592","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-019967\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-019967\" value_size:5313 >> failure:<>"}
	{"level":"info","ts":"2025-10-25T10:12:55.15147Z","caller":"traceutil/trace.go:171","msg":"trace[194837759] linearizableReadLoop","detail":"{readStateIndex:485; appliedIndex:482; }","duration":"336.507185ms","start":"2025-10-25T10:12:54.814949Z","end":"2025-10-25T10:12:55.151456Z","steps":["trace[194837759] 'read index received'  (duration: 150.959684ms)","trace[194837759] 'applied index is now lower than readState.Index'  (duration: 185.546492ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T10:12:55.152814Z","caller":"traceutil/trace.go:171","msg":"trace[576346825] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"376.252478ms","start":"2025-10-25T10:12:54.776537Z","end":"2025-10-25T10:12:55.152789Z","steps":["trace[576346825] 'process raft request'  (duration: 374.21177ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:12:55.152988Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:12:54.776521Z","time spent":"376.363068ms","remote":"127.0.0.1:42522","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":740,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/old-k8s-version-019967.1871b44c7fbee1c8\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/old-k8s-version-019967.1871b44c7fbee1c8\" value_size:658 lease:659665310572315767 >> failure:<>"}
	{"level":"info","ts":"2025-10-25T10:12:55.153571Z","caller":"traceutil/trace.go:171","msg":"trace[1300241435] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"366.653139ms","start":"2025-10-25T10:12:54.786888Z","end":"2025-10-25T10:12:55.153541Z","steps":["trace[1300241435] 'process raft request'  (duration: 364.271768ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:12:55.153788Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:12:54.786873Z","time spent":"366.733883ms","remote":"127.0.0.1:42576","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4762,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/old-k8s-version-019967\" mod_revision:397 > success:<request_put:<key:\"/registry/minions/old-k8s-version-019967\" value_size:4714 >> failure:<request_range:<key:\"/registry/minions/old-k8s-version-019967\" > >"}
	{"level":"warn","ts":"2025-10-25T10:12:55.154014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"339.072942ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-xqchd\" ","response":"range_response_count:1 size:4587"}
	{"level":"info","ts":"2025-10-25T10:12:55.154063Z","caller":"traceutil/trace.go:171","msg":"trace[1623528309] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-xqchd; range_end:; response_count:1; response_revision:464; }","duration":"339.119406ms","start":"2025-10-25T10:12:54.814923Z","end":"2025-10-25T10:12:55.154043Z","steps":["trace[1623528309] 'agreement among raft nodes before linearized reading'  (duration: 338.964768ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:12:55.154097Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:12:54.814908Z","time spent":"339.179412ms","remote":"127.0.0.1:42592","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4610,"request content":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-xqchd\" "}
	{"level":"warn","ts":"2025-10-25T10:12:55.155293Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"283.133793ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2025-10-25T10:12:55.155351Z","caller":"traceutil/trace.go:171","msg":"trace[1332048] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:464; }","duration":"283.200474ms","start":"2025-10-25T10:12:54.872139Z","end":"2025-10-25T10:12:55.155339Z","steps":["trace[1332048] 'agreement among raft nodes before linearized reading'  (duration: 282.922855ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:12:55.158303Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.62736ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2025-10-25T10:12:55.160084Z","caller":"traceutil/trace.go:171","msg":"trace[1850943651] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:464; }","duration":"286.281156ms","start":"2025-10-25T10:12:54.87366Z","end":"2025-10-25T10:12:55.159941Z","steps":["trace[1850943651] 'agreement among raft nodes before linearized reading'  (duration: 284.557261ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:12:55.160806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"288.074929ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2025-10-25T10:12:55.160986Z","caller":"traceutil/trace.go:171","msg":"trace[744907045] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:464; }","duration":"288.251523ms","start":"2025-10-25T10:12:54.87272Z","end":"2025-10-25T10:12:55.160972Z","steps":["trace[744907045] 'agreement among raft nodes before linearized reading'  (duration: 287.939934ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:12:55.161489Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.205863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-10-25T10:12:55.16164Z","caller":"traceutil/trace.go:171","msg":"trace[1549787144] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:464; }","duration":"289.339543ms","start":"2025-10-25T10:12:54.872272Z","end":"2025-10-25T10:12:55.161612Z","steps":["trace[1549787144] 'agreement among raft nodes before linearized reading'  (duration: 289.101037ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:12:55.162591Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"290.349413ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:1 size:775"}
	{"level":"info","ts":"2025-10-25T10:12:55.162801Z","caller":"traceutil/trace.go:171","msg":"trace[685567950] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:1; response_revision:464; }","duration":"290.565332ms","start":"2025-10-25T10:12:54.872228Z","end":"2025-10-25T10:12:55.162793Z","steps":["trace[685567950] 'agreement among raft nodes before linearized reading'  (duration: 289.643542ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:13:26.519809Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"343.220694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.226\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2025-10-25T10:13:26.519895Z","caller":"traceutil/trace.go:171","msg":"trace[888286509] range","detail":"{range_begin:/registry/masterleases/192.168.39.226; range_end:; response_count:1; response_revision:646; }","duration":"343.319485ms","start":"2025-10-25T10:13:26.176561Z","end":"2025-10-25T10:13:26.519881Z","steps":["trace[888286509] 'range keys from in-memory index tree'  (duration: 343.098739ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:13:26.519931Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:13:26.176542Z","time spent":"343.380588ms","remote":"127.0.0.1:42474","response type":"/etcdserverpb.KV/Range","request count":0,"request size":39,"response count":1,"response size":158,"request content":"key:\"/registry/masterleases/192.168.39.226\" "}
	{"level":"info","ts":"2025-10-25T10:13:26.767179Z","caller":"traceutil/trace.go:171","msg":"trace[959740343] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"169.157892ms","start":"2025-10-25T10:13:26.597775Z","end":"2025-10-25T10:13:26.766933Z","steps":["trace[959740343] 'process raft request'  (duration: 126.584802ms)","trace[959740343] 'compare'  (duration: 41.861478ms)"],"step_count":2}
	
	
	==> etcd [8080679efccf] <==
	{"level":"info","ts":"2025-10-25T10:11:46.05137Z","caller":"traceutil/trace.go:171","msg":"trace[457405828] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"535.891643ms","start":"2025-10-25T10:11:45.515457Z","end":"2025-10-25T10:11:46.051349Z","steps":["trace[457405828] 'process raft request'  (duration: 535.072881ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:11:46.051564Z","caller":"traceutil/trace.go:171","msg":"trace[2004177702] linearizableReadLoop","detail":"{readStateIndex:358; appliedIndex:357; }","duration":"333.43996ms","start":"2025-10-25T10:11:45.717468Z","end":"2025-10-25T10:11:46.050908Z","steps":["trace[2004177702] 'read index received'  (duration: 333.07117ms)","trace[2004177702] 'applied index is now lower than readState.Index'  (duration: 368.161µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T10:11:46.051669Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"436.763804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T10:11:46.052036Z","caller":"traceutil/trace.go:171","msg":"trace[347910312] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:348; }","duration":"437.140884ms","start":"2025-10-25T10:11:45.614886Z","end":"2025-10-25T10:11:46.052027Z","steps":["trace[347910312] 'agreement among raft nodes before linearized reading'  (duration: 436.743208ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:11:46.05207Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:11:45.614873Z","time spent":"437.183014ms","remote":"127.0.0.1:40400","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":28,"request content":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" "}
	{"level":"warn","ts":"2025-10-25T10:11:46.053789Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:11:45.515441Z","time spent":"536.288264ms","remote":"127.0.0.1:40310","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":749,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/configmaps/kube-system/coredns\" mod_revision:228 > success:<request_put:<key:\"/registry/configmaps/kube-system/coredns\" value_size:701 >> failure:<request_range:<key:\"/registry/configmaps/kube-system/coredns\" > >"}
	{"level":"info","ts":"2025-10-25T10:11:46.056166Z","caller":"traceutil/trace.go:171","msg":"trace[252938826] transaction","detail":"{read_only:false; response_revision:349; number_of_response:1; }","duration":"327.295876ms","start":"2025-10-25T10:11:45.728856Z","end":"2025-10-25T10:11:46.056152Z","steps":["trace[252938826] 'process raft request'  (duration: 326.906563ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:11:46.056704Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:11:45.72884Z","time spent":"327.764278ms","remote":"127.0.0.1:40278","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":732,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-z9lpj.1871b43d8cbcc5d1\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-z9lpj.1871b43d8cbcc5d1\" value_size:652 lease:659665310550283856 >> failure:<>"}
	{"level":"info","ts":"2025-10-25T10:11:58.827765Z","caller":"traceutil/trace.go:171","msg":"trace[2113752601] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"126.141255ms","start":"2025-10-25T10:11:58.701604Z","end":"2025-10-25T10:11:58.827745Z","steps":["trace[2113752601] 'process raft request'  (duration: 126.03987ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:11:59.172436Z","caller":"traceutil/trace.go:171","msg":"trace[610270817] transaction","detail":"{read_only:false; response_revision:408; number_of_response:1; }","duration":"138.33607ms","start":"2025-10-25T10:11:59.034082Z","end":"2025-10-25T10:11:59.172418Z","steps":["trace[610270817] 'process raft request'  (duration: 138.019221ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:11:59.291857Z","caller":"traceutil/trace.go:171","msg":"trace[801642274] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"113.222445ms","start":"2025-10-25T10:11:59.178592Z","end":"2025-10-25T10:11:59.291814Z","steps":["trace[801642274] 'process raft request'  (duration: 106.674837ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:11:59.302691Z","caller":"traceutil/trace.go:171","msg":"trace[461076853] linearizableReadLoop","detail":"{readStateIndex:423; appliedIndex:421; }","duration":"109.50711ms","start":"2025-10-25T10:11:59.193169Z","end":"2025-10-25T10:11:59.302676Z","steps":["trace[461076853] 'read index received'  (duration: 92.106408ms)","trace[461076853] 'applied index is now lower than readState.Index'  (duration: 17.400045ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T10:11:59.30305Z","caller":"traceutil/trace.go:171","msg":"trace[700043840] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"120.811549ms","start":"2025-10-25T10:11:59.182152Z","end":"2025-10-25T10:11:59.302963Z","steps":["trace[700043840] 'process raft request'  (duration: 120.429122ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:11:59.303137Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.955488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:1 size:1312"}
	{"level":"info","ts":"2025-10-25T10:11:59.303181Z","caller":"traceutil/trace.go:171","msg":"trace[1456919852] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:1; response_revision:410; }","duration":"110.021954ms","start":"2025-10-25T10:11:59.193147Z","end":"2025-10-25T10:11:59.303169Z","steps":["trace[1456919852] 'agreement among raft nodes before linearized reading'  (duration: 109.924442ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:12:11.494579Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-25T10:12:11.494686Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"old-k8s-version-019967","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.226:2380"],"advertise-client-urls":["https://192.168.39.226:2379"]}
	{"level":"warn","ts":"2025-10-25T10:12:11.49478Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T10:12:11.494857Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T10:12:11.59691Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.226:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T10:12:11.598006Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.226:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-25T10:12:11.598369Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9e3e2863ac888927","current-leader-member-id":"9e3e2863ac888927"}
	{"level":"info","ts":"2025-10-25T10:12:11.606509Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.226:2380"}
	{"level":"info","ts":"2025-10-25T10:12:11.606587Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.226:2380"}
	{"level":"info","ts":"2025-10-25T10:12:11.606595Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"old-k8s-version-019967","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.226:2380"],"advertise-client-urls":["https://192.168.39.226:2379"]}
	
	
	==> kernel <==
	 10:14:02 up 1 min,  0 users,  load average: 0.50, 0.25, 0.09
	Linux old-k8s-version-019967 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0231e6e289ad] <==
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:12:12.576386       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:12:12.576398       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:12:12.576493       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [8f78166a5238] <==
	W1025 10:12:58.308027       1 aggregator.go:164] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1025 10:12:58.340324       1 aggregator.go:164] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1025 10:12:58.404716       1 aggregator.go:164] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1025 10:13:00.133809       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 10:13:00.297948       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.100.37"}
	I1025 10:13:00.329740       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.107.162"}
	I1025 10:13:06.892826       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:13:07.289294       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1025 10:13:26.768642       1 trace.go:236] Trace[1255543489]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.226,type:*v1.Endpoints,resource:apiServerIPInfo (25-Oct-2025 10:13:26.176) (total time: 592ms):
	Trace[1255543489]: ---"initial value restored" 346ms (10:13:26.523)
	Trace[1255543489]: ---"Transaction prepared" 74ms (10:13:26.597)
	Trace[1255543489]: ---"Txn call completed" 171ms (10:13:26.768)
	Trace[1255543489]: [592.565085ms] [592.565085ms] END
	W1025 10:13:59.741172       1 handler_proxy.go:93] no RequestInfo found in the context
	E1025 10:13:59.741296       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1025 10:13:59.741337       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1025 10:13:59.742259       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.110.221.68:443: connect: connection refused
	I1025 10:13:59.742813       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1025 10:13:59.744551       1 handler_proxy.go:93] no RequestInfo found in the context
	E1025 10:13:59.744603       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1025 10:13:59.744611       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
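
	The 503s above are the aggregation layer failing to proxy v1beta1.metrics.k8s.io: the backing metrics-server Service at 10.110.221.68:443 refuses connections because its pod never starts (see the kubelet ErrImagePull entries below). The quickest confirmation is the APIService's Available condition; a sketch using stock kubectl:

	  kubectl --context old-k8s-version-019967 get apiservice v1beta1.metrics.k8s.io \
	    -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'
	  # an empty endpoints list here would match the dial errors above
	  kubectl --context old-k8s-version-019967 -n kube-system get endpoints metrics-server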
	
	
	==> kube-controller-manager [17d10789bbf9] <==
	I1025 10:11:43.803769       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-sbkvf"
	I1025 10:11:43.851560       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xqchd"
	I1025 10:11:43.889765       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="492.272259ms"
	I1025 10:11:43.927222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="37.400907ms"
	I1025 10:11:43.930321       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="125.306µs"
	I1025 10:11:43.974895       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.442µs"
	I1025 10:11:46.141553       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1025 10:11:46.184526       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-sbkvf"
	I1025 10:11:46.206281       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.034157ms"
	I1025 10:11:46.224721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.268206ms"
	I1025 10:11:46.226268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="138.791µs"
	I1025 10:11:46.786836       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.197µs"
	I1025 10:11:46.826844       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="1.505934ms"
	I1025 10:11:46.899438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="39.353619ms"
	I1025 10:11:46.899651       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="172.188µs"
	I1025 10:11:55.665306       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.836µs"
	I1025 10:11:55.932357       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.723µs"
	I1025 10:11:55.947071       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="680.915µs"
	I1025 10:11:55.952409       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.374µs"
	I1025 10:12:10.319022       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57f55c9bc5 to 1"
	I1025 10:12:10.357102       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57f55c9bc5-d9tm8"
	I1025 10:12:10.383396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="68.644478ms"
	I1025 10:12:10.413810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="29.874769ms"
	I1025 10:12:10.416244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="189.829µs"
	I1025 10:12:10.450418       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="67.147µs"
	
	
	==> kube-controller-manager [64be38358ca4] <==
	I1025 10:13:07.055182       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="314.518µs"
	I1025 10:13:07.081903       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 10:13:07.301468       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1025 10:13:07.307214       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1025 10:13:07.398019       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:13:07.398077       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 10:13:07.402188       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-gj2s8"
	I1025 10:13:07.410289       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-h4pfc"
	I1025 10:13:07.418267       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="117.758884ms"
	I1025 10:13:07.431374       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="126.237665ms"
	I1025 10:13:07.444772       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:13:07.451407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="33.056086ms"
	I1025 10:13:07.451512       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.562µs"
	I1025 10:13:07.495021       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.372µs"
	I1025 10:13:07.499523       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.990949ms"
	I1025 10:13:07.499784       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="182.808µs"
	I1025 10:13:07.500011       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="90.421µs"
	I1025 10:13:10.904504       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="62.085µs"
	I1025 10:13:15.386821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="16.160382ms"
	I1025 10:13:15.388071       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="752.073µs"
	I1025 10:13:15.392380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.747µs"
	E1025 10:13:59.883411       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1025 10:13:59.899905       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1025 10:14:01.641247       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="191.217µs"
	I1025 10:14:01.667985       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="148.184µs"
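
	The "stale GroupVersion discovery: metrics.k8s.io/v1beta1" error is a knock-on effect of the same unavailable aggregated API: the resource-quota controller and the garbage collector both require complete discovery and will keep logging until metrics-server serves. The failure reproduces from any client; a sketch:

	  # expected to error with "unable to retrieve the complete list of server APIs" while metrics-server is down
	  kubectl --context old-k8s-version-019967 api-resources --api-group=metrics.k8s.io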
	
	
	==> kube-proxy [4a5b72a8270c] <==
	I1025 10:12:56.301550       1 server_others.go:69] "Using iptables proxy"
	I1025 10:12:56.326673       1 node.go:141] Successfully retrieved node IP: 192.168.39.226
	I1025 10:12:56.440253       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1025 10:12:56.440292       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 10:12:56.451953       1 server_others.go:152] "Using iptables Proxier"
	I1025 10:12:56.452817       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 10:12:56.455248       1 server.go:846] "Version info" version="v1.28.0"
	I1025 10:12:56.455420       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:12:56.462282       1 config.go:188] "Starting service config controller"
	I1025 10:12:56.464983       1 config.go:315] "Starting node config controller"
	I1025 10:12:56.467532       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 10:12:56.462950       1 config.go:97] "Starting endpoint slice config controller"
	I1025 10:12:56.479017       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 10:12:56.479687       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 10:12:56.579881       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 10:12:56.582722       1 shared_informer.go:318] Caches are synced for service config
	I1025 10:12:56.583016       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [bbbdcbb03f35] <==
	I1025 10:11:45.443207       1 server_others.go:69] "Using iptables proxy"
	I1025 10:11:45.726300       1 node.go:141] Successfully retrieved node IP: 192.168.39.226
	I1025 10:11:45.765796       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1025 10:11:45.765833       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 10:11:45.768618       1 server_others.go:152] "Using iptables Proxier"
	I1025 10:11:45.768675       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 10:11:45.769138       1 server.go:846] "Version info" version="v1.28.0"
	I1025 10:11:45.769162       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:11:45.770150       1 config.go:188] "Starting service config controller"
	I1025 10:11:45.770192       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 10:11:45.770212       1 config.go:97] "Starting endpoint slice config controller"
	I1025 10:11:45.770369       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 10:11:45.771153       1 config.go:315] "Starting node config controller"
	I1025 10:11:45.771178       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 10:11:45.870995       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 10:11:45.871033       1 shared_informer.go:318] Caches are synced for service config
	I1025 10:11:45.871358       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [07c9f990248e] <==
	E1025 10:11:27.964091       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 10:11:28.778053       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 10:11:28.778095       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1025 10:11:28.824518       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 10:11:28.824644       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:11:28.974489       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 10:11:28.975263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 10:11:29.010414       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1025 10:11:29.010459       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1025 10:11:29.083295       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 10:11:29.083361       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1025 10:11:29.107144       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 10:11:29.107372       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1025 10:11:29.222847       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 10:11:29.222899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1025 10:11:29.226591       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 10:11:29.226641       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1025 10:11:29.243972       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 10:11:29.244478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1025 10:11:29.310506       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 10:11:29.310599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1025 10:11:29.317819       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 10:11:29.317869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1025 10:11:31.428795       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1025 10:12:11.460435       1 run.go:74] "command failed" err="finished without leader elect"
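
	The forbidden list/watch errors above are startup noise: they occur while the apiserver is still reconciling the default RBAC roles, and they stop once the scheduler's informer caches sync at 10:11:31. Whether the scheduler's permissions ended up correct can be verified after the fact; a sketch:

	  kubectl --context old-k8s-version-019967 auth can-i list csinodes.storage.k8s.io \
	    --as=system:kube-scheduler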
	
	
	==> kube-scheduler [2a1ffa2af22a] <==
	I1025 10:12:52.558849       1 serving.go:348] Generated self-signed cert in-memory
	W1025 10:12:54.377511       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 10:12:54.378406       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:12:54.378493       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 10:12:54.378591       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 10:12:54.445402       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1025 10:12:54.447519       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:12:54.458920       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 10:12:54.458912       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:12:54.460782       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 10:12:54.465051       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 10:12:54.565600       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.295943    4173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c24811c7113ce61cfbc005b86b7ee179-ca-certs\") pod \"kube-controller-manager-old-k8s-version-019967\" (UID: \"c24811c7113ce61cfbc005b86b7ee179\") " pod="kube-system/kube-controller-manager-old-k8s-version-019967"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.296003    4173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c24811c7113ce61cfbc005b86b7ee179-k8s-certs\") pod \"kube-controller-manager-old-k8s-version-019967\" (UID: \"c24811c7113ce61cfbc005b86b7ee179\") " pod="kube-system/kube-controller-manager-old-k8s-version-019967"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.296082    4173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c24811c7113ce61cfbc005b86b7ee179-kubeconfig\") pod \"kube-controller-manager-old-k8s-version-019967\" (UID: \"c24811c7113ce61cfbc005b86b7ee179\") " pod="kube-system/kube-controller-manager-old-k8s-version-019967"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.296172    4173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/514ee6f89b5acd595fb32f3c1fe25f88-kubeconfig\") pod \"kube-scheduler-old-k8s-version-019967\" (UID: \"514ee6f89b5acd595fb32f3c1fe25f88\") " pod="kube-system/kube-scheduler-old-k8s-version-019967"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.423295    4173 apiserver.go:52] "Watching apiserver"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.429951    4173 topology_manager.go:215] "Topology Admit Handler" podUID="a80d6c71-5c2e-4318-843b-23d21bd67161" podNamespace="kube-system" podName="kube-proxy-z9lpj"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.430315    4173 topology_manager.go:215] "Topology Admit Handler" podUID="c83a23f1-f731-41c3-a7d4-b616238c6380" podNamespace="kube-system" podName="coredns-5dd5756b68-xqchd"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.430435    4173 topology_manager.go:215] "Topology Admit Handler" podUID="50f18c3a-8622-4521-9086-b343c1539058" podNamespace="kube-system" podName="storage-provisioner"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.431150    4173 topology_manager.go:215] "Topology Admit Handler" podUID="079a2647-b585-4cd6-9b2b-e23b90a5f34b" podNamespace="default" podName="busybox"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.431322    4173 topology_manager.go:215] "Topology Admit Handler" podUID="33627814-5083-4a1c-972e-4920295cb7f1" podNamespace="kube-system" podName="metrics-server-57f55c9bc5-d9tm8"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.431490    4173 topology_manager.go:215] "Topology Admit Handler" podUID="70e6bc5f-09a2-4dbb-939a-f54bcb67649e" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-gj2s8"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.431618    4173 topology_manager.go:215] "Topology Admit Handler" podUID="ce0ae58a-f2b9-4660-aa10-960f6e791450" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-h4pfc"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.454245    4173 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.499159    4173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/50f18c3a-8622-4521-9086-b343c1539058-tmp\") pod \"storage-provisioner\" (UID: \"50f18c3a-8622-4521-9086-b343c1539058\") " pod="kube-system/storage-provisioner"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.499778    4173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a80d6c71-5c2e-4318-843b-23d21bd67161-lib-modules\") pod \"kube-proxy-z9lpj\" (UID: \"a80d6c71-5c2e-4318-843b-23d21bd67161\") " pod="kube-system/kube-proxy-z9lpj"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.499960    4173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a80d6c71-5c2e-4318-843b-23d21bd67161-xtables-lock\") pod \"kube-proxy-z9lpj\" (UID: \"a80d6c71-5c2e-4318-843b-23d21bd67161\") " pod="kube-system/kube-proxy-z9lpj"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.734038    4173 scope.go:117] "RemoveContainer" containerID="c0d17429ba6bcab342d50a7549f6c959545788d98b43b999ff2ce28bf7d383ab"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: E1025 10:14:01.780948    4173 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: E1025 10:14:01.781038    4173 kuberuntime_image.go:53] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: E1025 10:14:01.781362    4173 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zf2nv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-d9tm8_kube-system(33627814-5083-4a1c-972e-4920295cb7f1): ErrImagePull: Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: E1025 10:14:01.781426    4173 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-d9tm8" podUID="33627814-5083-4a1c-972e-4920295cb7f1"
	Oct 25 10:14:02 old-k8s-version-019967 kubelet[4173]: E1025 10:14:02.061529    4173 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Oct 25 10:14:02 old-k8s-version-019967 kubelet[4173]: E1025 10:14:02.061574    4173 kuberuntime_image.go:53] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Oct 25 10:14:02 old-k8s-version-019967 kubelet[4173]: E1025 10:14:02.061711    4173 kuberuntime_manager.go:1209] container &Container{Name:dashboard-metrics-scraper,Image:registry.k8s.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-v9fjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dashboard-metrics-scraper-5f989dc9cf-gj2s8_kubernetes-dashboard(70e6bc5f-09a2-4dbb-939a-f54bcb67649e): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
	Oct 25 10:14:02 old-k8s-version-019967 kubelet[4173]: E1025 10:14:02.061758    4173 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj2s8" podUID="70e6bc5f-09a2-4dbb-939a-f54bcb67649e"
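
	Two distinct pull failures appear here: metrics-server references what looks like a deliberately unresolvable fake.domain registry (the DNS lookup fails), while dashboard-metrics-scraper pulls registry.k8s.io/echoserver:1.4, a schema 1 image that current Docker daemons reject outright. Both can be reproduced from the host; a sketch, assuming this profile uses the docker runtime:

	  # the schema-1 rejection should reproduce verbatim
	  out/minikube-linux-amd64 -p old-k8s-version-019967 ssh -- docker pull registry.k8s.io/echoserver:1.4
	  # pull history for the metrics-server pod, while it still exists
	  kubectl --context old-k8s-version-019967 -n kube-system get events \
	    --field-selector involvedObject.name=metrics-server-57f55c9bc5-d9tm8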
	
	
	==> kubernetes-dashboard [ec8d99f78aa4] <==
	2025/10/25 10:13:15 Using namespace: kubernetes-dashboard
	2025/10/25 10:13:15 Using in-cluster config to connect to apiserver
	2025/10/25 10:13:15 Using secret token for csrf signing
	2025/10/25 10:13:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:13:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:13:15 Successful initial request to the apiserver, version: v1.28.0
	2025/10/25 10:13:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:13:15 Generating JWE encryption key
	2025/10/25 10:13:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:13:15 Initializing JWE encryption key from synchronized object
	2025/10/25 10:13:15 Creating in-cluster Sidecar client
	2025/10/25 10:13:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:13:15 Serving insecurely on HTTP port: 9090
	2025/10/25 10:13:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:13:15 Starting overwatch
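
	The dashboard itself is healthy (serving on :9090); only its Sidecar metrics client fails, because the dashboard-metrics-scraper pod never started, for the echoserver pull reason above. Checking that the scraper Service has no ready endpoints would confirm this; a sketch:

	  kubectl --context old-k8s-version-019967 -n kubernetes-dashboard \
	    get svc,endpoints dashboard-metrics-scraper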
	
	
	==> storage-provisioner [c0d17429ba6b] <==
	I1025 10:12:56.154042       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:13:26.165277       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
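
	The timing here is suggestive: this provisioner dialed https://10.96.0.1:443 at 10:12:56.15, while the restarted kube-proxy (above) only reported its caches synced at 10:12:56.58, so the Service VIP was plausibly not yet programmed in iptables when the first connection attempt went out. The replacement container (below) starts cleanly. VIP reachability from the guest can be spot-checked afterwards; a sketch:

	  # assumption: curl is available in the minikube guest image
	  out/minikube-linux-amd64 -p old-k8s-version-019967 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version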
	
	
	==> storage-provisioner [e09dc34b53e8] <==
	I1025 10:14:02.093687       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:14:02.140095       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:14:02.143773       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-019967 -n old-k8s-version-019967
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-019967 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-d9tm8 dashboard-metrics-scraper-5f989dc9cf-gj2s8
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-019967 describe pod metrics-server-57f55c9bc5-d9tm8 dashboard-metrics-scraper-5f989dc9cf-gj2s8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-019967 describe pod metrics-server-57f55c9bc5-d9tm8 dashboard-metrics-scraper-5f989dc9cf-gj2s8: exit status 1 (86.104233ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-d9tm8" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-5f989dc9cf-gj2s8" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-019967 describe pod metrics-server-57f55c9bc5-d9tm8 dashboard-metrics-scraper-5f989dc9cf-gj2s8: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019967 -n old-k8s-version-019967
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-019967 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-019967 logs -n 25: (1.249009985s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-266353 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                        │ cilium-266353             │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo cri-dockerd --version                                                                                                                 │ cilium-266353             │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo systemctl status containerd --all --full --no-pager                                                                                   │ cilium-266353             │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo systemctl cat containerd --no-pager                                                                                                   │ cilium-266353             │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo cat /lib/systemd/system/containerd.service                                                                                            │ cilium-266353             │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo cat /etc/containerd/config.toml                                                                                                       │ cilium-266353             │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo containerd config dump                                                                                                                │ cilium-266353             │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo systemctl status crio --all --full --no-pager                                                                                         │ cilium-266353             │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo systemctl cat crio --no-pager                                                                                                         │ cilium-266353             │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                               │ cilium-266353             │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p cilium-266353 sudo crio config                                                                                                                           │ cilium-266353             │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ delete  │ -p cilium-266353                                                                                                                                            │ cilium-266353             │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ start   │ -p gvisor-130661 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2                     │ gvisor-130661             │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ ssh     │ -p NoKubernetes-586342 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-586342       │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │                     │
	│ image   │ old-k8s-version-019967 image list --format=json                                                                                                             │ old-k8s-version-019967    │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │ 25 Oct 25 10:13 UTC │
	│ pause   │ -p old-k8s-version-019967 --alsologtostderr -v=1                                                                                                            │ old-k8s-version-019967    │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │ 25 Oct 25 10:13 UTC │
	│ stop    │ -p NoKubernetes-586342                                                                                                                                      │ NoKubernetes-586342       │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │ 25 Oct 25 10:13 UTC │
	│ start   │ -p NoKubernetes-586342 --driver=kvm2                                                                                                                        │ NoKubernetes-586342       │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │ 25 Oct 25 10:14 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-595347 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ running-upgrade-595347    │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │                     │
	│ delete  │ -p running-upgrade-595347                                                                                                                                   │ running-upgrade-595347    │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │ 25 Oct 25 10:13 UTC │
	│ start   │ -p force-systemd-env-926084 --memory=3072 --alsologtostderr -v=5 --driver=kvm2                                                                              │ force-systemd-env-926084  │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │                     │
	│ unpause │ -p old-k8s-version-019967 --alsologtostderr -v=1                                                                                                            │ old-k8s-version-019967    │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │ 25 Oct 25 10:14 UTC │
	│ ssh     │ -p NoKubernetes-586342 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-586342       │ jenkins │ v1.37.0 │ 25 Oct 25 10:14 UTC │                     │
	│ delete  │ -p NoKubernetes-586342                                                                                                                                      │ NoKubernetes-586342       │ jenkins │ v1.37.0 │ 25 Oct 25 10:14 UTC │ 25 Oct 25 10:14 UTC │
	│ start   │ -p force-systemd-flag-929224 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2                                                             │ force-systemd-flag-929224 │ jenkins │ v1.37.0 │ 25 Oct 25 10:14 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:14:03
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
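
	Editor's note: the "Log line format" header above is the standard klog/glog prefix. As a quick illustration (a stdlib-only sketch, not minikube code; the field names in the output are chosen here), one way to split such a line into its parts:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches the prefix documented above:
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		sample := "I1025 10:14:03.122919  403274 out.go:360] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(sample)
		if m == nil {
			fmt.Println("not a klog-formatted line")
			return
		}
		fmt.Printf("severity=%s mmdd=%s time=%s threadid=%s file=%s line=%s\nmsg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
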
	I1025 10:14:03.122919  403274 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:14:03.123104  403274 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:14:03.123116  403274 out.go:374] Setting ErrFile to fd 2...
	I1025 10:14:03.123121  403274 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:14:03.123544  403274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
	I1025 10:14:03.124457  403274 out.go:368] Setting JSON to false
	I1025 10:14:03.126216  403274 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6985,"bootTime":1761380258,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:14:03.126423  403274 start.go:141] virtualization: kvm guest
	I1025 10:14:03.128490  403274 out.go:179] * [force-systemd-flag-929224] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:14:03.131840  403274 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:14:03.131871  403274 notify.go:220] Checking for updates...
	I1025 10:14:03.135365  403274 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:14:03.136648  403274 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-367343/kubeconfig
	I1025 10:14:03.140720  403274 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-367343/.minikube
	I1025 10:14:03.142220  403274 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:14:03.143463  403274 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:14:03.145979  403274 config.go:182] Loaded profile config "force-systemd-env-926084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 10:14:03.146140  403274 config.go:182] Loaded profile config "gvisor-130661": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1025 10:14:03.146301  403274 config.go:182] Loaded profile config "old-k8s-version-019967": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1025 10:14:03.146480  403274 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:14:03.198796  403274 out.go:179] * Using the kvm2 driver based on user configuration
	I1025 10:14:03.200154  403274 start.go:305] selected driver: kvm2
	I1025 10:14:03.200220  403274 start.go:925] validating driver "kvm2" against <nil>
	I1025 10:14:03.200250  403274 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:14:03.201513  403274 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:14:03.201913  403274 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 10:14:03.201954  403274 cni.go:84] Creating CNI manager for ""
	I1025 10:14:03.202014  403274 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 10:14:03.202036  403274 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 10:14:03.202133  403274 start.go:349] cluster config:
	{Name:force-systemd-flag-929224 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-929224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
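
	Editor's note: the cluster config above is a Go struct printed with %+v. A trimmed reconstruction of the fields visible in that dump (inferred from the output only; minikube's actual ClusterConfig in pkg/minikube/config has many more fields):

	package config

	// ClusterConfig (subset): field names and shapes inferred from the
	// %+v dump above; this is a reading aid, not minikube's definition.
	type ClusterConfig struct {
		Name             string // force-systemd-flag-929224
		KeepContext      bool
		EmbedCerts       bool
		Memory           int    // MiB; 3072 here
		CPUs             int
		DiskSize         int    // MB; 20000 here
		Driver           string // kvm2
		KubernetesConfig KubernetesConfig
		Nodes            []Node
	}

	type KubernetesConfig struct {
		KubernetesVersion string // v1.34.1
		ClusterName       string
		ContainerRuntime  string // docker
		NetworkPlugin     string // cni
		ServiceCIDR       string // 10.96.0.0/12
	}

	type Node struct {
		Name              string
		IP                string
		Port              int // 8443
		KubernetesVersion string
		ContainerRuntime  string
		ControlPlane      bool
		Worker            bool
	}
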
	I1025 10:14:03.202395  403274 iso.go:125] acquiring lock: {Name:mkaf34b0e79311c874a9b61067611bc0cdebbfac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:14:03.207065  403274 out.go:179] * Starting "force-systemd-flag-929224" primary control-plane node in "force-systemd-flag-929224" cluster
	I1025 10:14:03.128248  402024 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502125605s
	I1025 10:14:03.154022  402024 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:14:03.176123  402024 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:14:03.196863  402024 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:14:03.197170  402024 kubeadm.go:318] [mark-control-plane] Marking the node gvisor-130661 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:14:03.216566  402024 kubeadm.go:318] [bootstrap-token] Using token: ozptpq.qlcgt9enad5fd9k9
	I1025 10:13:58.821201  402781 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21767-367343/.minikube/machines/force-systemd-env-926084/id_rsa...
	I1025 10:13:59.019663  402781 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21767-367343/.minikube/machines/force-systemd-env-926084/force-systemd-env-926084.rawdisk...
	I1025 10:13:59.019707  402781 main.go:141] libmachine: Writing magic tar header
	I1025 10:13:59.019732  402781 main.go:141] libmachine: Writing SSH key tar header
	I1025 10:13:59.019867  402781 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21767-367343/.minikube/machines/force-systemd-env-926084 ...
	I1025 10:13:59.019961  402781 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21767-367343/.minikube/machines/force-systemd-env-926084
	I1025 10:13:59.020018  402781 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21767-367343/.minikube/machines/force-systemd-env-926084 (perms=drwx------)
	I1025 10:13:59.020046  402781 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21767-367343/.minikube/machines
	I1025 10:13:59.020062  402781 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21767-367343/.minikube/machines (perms=drwxr-xr-x)
	I1025 10:13:59.020082  402781 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21767-367343/.minikube
	I1025 10:13:59.020100  402781 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21767-367343/.minikube (perms=drwxr-xr-x)
	I1025 10:13:59.020113  402781 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21767-367343
	I1025 10:13:59.020133  402781 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21767-367343 (perms=drwxrwxr-x)
	I1025 10:13:59.020150  402781 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1025 10:13:59.020167  402781 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1025 10:13:59.020179  402781 main.go:141] libmachine: checking permissions on dir: /home/jenkins
	I1025 10:13:59.020215  402781 main.go:141] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1025 10:13:59.020231  402781 main.go:141] libmachine: checking permissions on dir: /home
	I1025 10:13:59.020240  402781 main.go:141] libmachine: skipping /home - not owner
	I1025 10:13:59.020246  402781 main.go:141] libmachine: defining domain...
	I1025 10:13:59.021568  402781 main.go:141] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>force-systemd-env-926084</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21767-367343/.minikube/machines/force-systemd-env-926084/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21767-367343/.minikube/machines/force-systemd-env-926084/force-systemd-env-926084.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-force-systemd-env-926084'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
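
	Editor's note: the XML above is the domain definition minikube's kvm2 driver hands to libvirt ("defining domain using XML", then "starting domain"). A minimal sketch of that define-then-start sequence via the libvirt Go bindings (libvirt.org/go/libvirt); the domain.xml file name is a placeholder for the document above:

	package main

	import (
		"log"
		"os"

		"libvirt.org/go/libvirt"
	)

	func defineAndStart(domainXML string) {
		// Connect to the system daemon, matching "Setting default
		// libvirt URI to qemu:///system" earlier in this log.
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()

		// "defining domain using XML": register the domain with libvirt.
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			log.Fatalf("define: %v", err)
		}
		defer dom.Free()

		// "starting domain...": boot the defined domain.
		if err := dom.Create(); err != nil {
			log.Fatalf("start: %v", err)
		}
	}

	func main() {
		xml, err := os.ReadFile("domain.xml") // placeholder for the XML above
		if err != nil {
			log.Fatal(err)
		}
		defineAndStart(string(xml))
	}
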
	
	I1025 10:13:59.029736  402781 main.go:141] libmachine: domain force-systemd-env-926084 has defined MAC address 52:54:00:0a:3e:96 in network default
	I1025 10:13:59.030414  402781 main.go:141] libmachine: domain force-systemd-env-926084 has defined MAC address 52:54:00:90:ff:bf in network mk-force-systemd-env-926084
	I1025 10:13:59.030431  402781 main.go:141] libmachine: starting domain...
	I1025 10:13:59.030436  402781 main.go:141] libmachine: ensuring networks are active...
	I1025 10:13:59.031255  402781 main.go:141] libmachine: Ensuring network default is active
	I1025 10:13:59.031637  402781 main.go:141] libmachine: Ensuring network mk-force-systemd-env-926084 is active
	I1025 10:13:59.032230  402781 main.go:141] libmachine: getting domain XML...
	I1025 10:13:59.033283  402781 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>force-systemd-env-926084</name>
	  <uuid>e2161a44-899f-4f3c-89f5-da0ea1e4717d</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21767-367343/.minikube/machines/force-systemd-env-926084/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21767-367343/.minikube/machines/force-systemd-env-926084/force-systemd-env-926084.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:90:ff:bf'/>
	      <source network='mk-force-systemd-env-926084'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:0a:3e:96'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1025 10:14:00.547298  402781 main.go:141] libmachine: waiting for domain to start...
	I1025 10:14:00.549127  402781 main.go:141] libmachine: domain is now running
	I1025 10:14:00.549148  402781 main.go:141] libmachine: waiting for IP...
	I1025 10:14:00.550376  402781 main.go:141] libmachine: domain force-systemd-env-926084 has defined MAC address 52:54:00:90:ff:bf in network mk-force-systemd-env-926084
	I1025 10:14:00.551045  402781 main.go:141] libmachine: no network interface addresses found for domain force-systemd-env-926084 (source=lease)
	I1025 10:14:00.551064  402781 main.go:141] libmachine: trying to list again with source=arp
	I1025 10:14:00.551451  402781 main.go:141] libmachine: unable to find current IP address of domain force-systemd-env-926084 in network mk-force-systemd-env-926084 (interfaces detected: [])
	I1025 10:14:00.551501  402781 retry.go:31] will retry after 298.912954ms: waiting for domain to come up
	I1025 10:14:00.852595  402781 main.go:141] libmachine: domain force-systemd-env-926084 has defined MAC address 52:54:00:90:ff:bf in network mk-force-systemd-env-926084
	I1025 10:14:00.853579  402781 main.go:141] libmachine: no network interface addresses found for domain force-systemd-env-926084 (source=lease)
	I1025 10:14:00.853604  402781 main.go:141] libmachine: trying to list again with source=arp
	I1025 10:14:00.854398  402781 main.go:141] libmachine: unable to find current IP address of domain force-systemd-env-926084 in network mk-force-systemd-env-926084 (interfaces detected: [])
	I1025 10:14:00.854450  402781 retry.go:31] will retry after 242.119019ms: waiting for domain to come up
	I1025 10:14:01.098504  402781 main.go:141] libmachine: domain force-systemd-env-926084 has defined MAC address 52:54:00:90:ff:bf in network mk-force-systemd-env-926084
	I1025 10:14:01.099451  402781 main.go:141] libmachine: no network interface addresses found for domain force-systemd-env-926084 (source=lease)
	I1025 10:14:01.099475  402781 main.go:141] libmachine: trying to list again with source=arp
	I1025 10:14:01.099858  402781 main.go:141] libmachine: unable to find current IP address of domain force-systemd-env-926084 in network mk-force-systemd-env-926084 (interfaces detected: [])
	I1025 10:14:01.099908  402781 retry.go:31] will retry after 443.321207ms: waiting for domain to come up
	I1025 10:14:01.545457  402781 main.go:141] libmachine: domain force-systemd-env-926084 has defined MAC address 52:54:00:90:ff:bf in network mk-force-systemd-env-926084
	I1025 10:14:01.546405  402781 main.go:141] libmachine: no network interface addresses found for domain force-systemd-env-926084 (source=lease)
	I1025 10:14:01.546428  402781 main.go:141] libmachine: trying to list again with source=arp
	I1025 10:14:01.546867  402781 main.go:141] libmachine: unable to find current IP address of domain force-systemd-env-926084 in network mk-force-systemd-env-926084 (interfaces detected: [])
	I1025 10:14:01.546913  402781 retry.go:31] will retry after 394.535038ms: waiting for domain to come up
	I1025 10:14:01.943868  402781 main.go:141] libmachine: domain force-systemd-env-926084 has defined MAC address 52:54:00:90:ff:bf in network mk-force-systemd-env-926084
	I1025 10:14:01.944776  402781 main.go:141] libmachine: no network interface addresses found for domain force-systemd-env-926084 (source=lease)
	I1025 10:14:01.944800  402781 main.go:141] libmachine: trying to list again with source=arp
	I1025 10:14:01.945305  402781 main.go:141] libmachine: unable to find current IP address of domain force-systemd-env-926084 in network mk-force-systemd-env-926084 (interfaces detected: [])
	I1025 10:14:01.945389  402781 retry.go:31] will retry after 497.595708ms: waiting for domain to come up
	I1025 10:14:02.444483  402781 main.go:141] libmachine: domain force-systemd-env-926084 has defined MAC address 52:54:00:90:ff:bf in network mk-force-systemd-env-926084
	I1025 10:14:02.445174  402781 main.go:141] libmachine: no network interface addresses found for domain force-systemd-env-926084 (source=lease)
	I1025 10:14:02.445202  402781 main.go:141] libmachine: trying to list again with source=arp
	I1025 10:14:02.445667  402781 main.go:141] libmachine: unable to find current IP address of domain force-systemd-env-926084 in network mk-force-systemd-env-926084 (interfaces detected: [])
	I1025 10:14:02.445728  402781 retry.go:31] will retry after 669.232015ms: waiting for domain to come up
	I1025 10:14:03.116794  402781 main.go:141] libmachine: domain force-systemd-env-926084 has defined MAC address 52:54:00:90:ff:bf in network mk-force-systemd-env-926084
	I1025 10:14:03.117934  402781 main.go:141] libmachine: no network interface addresses found for domain force-systemd-env-926084 (source=lease)
	I1025 10:14:03.117965  402781 main.go:141] libmachine: trying to list again with source=arp
	I1025 10:14:03.118655  402781 main.go:141] libmachine: unable to find current IP address of domain force-systemd-env-926084 in network mk-force-systemd-env-926084 (interfaces detected: [])
	I1025 10:14:03.118721  402781 retry.go:31] will retry after 843.793468ms: waiting for domain to come up
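
	Editor's note: the retry loop above polls the private network's DHCP leases for the domain's MAC (source=lease), falls back to ARP, and waits with growing, jittered delays (298ms up to 843ms). A lease-side sketch with the same Go bindings; the Mac/IPaddr field names are the bindings' NetworkDHCPLease fields, and the backoff schedule here is illustrative, not minikube's exact one:

	package main

	import (
		"fmt"
		"time"

		"libvirt.org/go/libvirt"
	)

	// waitForIP mirrors the "waiting for IP..." loop above: list the DHCP
	// leases of the network until one matches the domain's MAC address.
	func waitForIP(conn *libvirt.Connect, network, mac string) (string, error) {
		net, err := conn.LookupNetworkByName(network)
		if err != nil {
			return "", err
		}
		defer net.Free()

		delay := 250 * time.Millisecond
		for attempt := 0; attempt < 20; attempt++ {
			leases, err := net.GetDHCPLeases()
			if err != nil {
				return "", err
			}
			for _, l := range leases {
				if l.Mac == mac {
					return l.IPaddr, nil
				}
			}
			time.Sleep(delay)
			delay += delay / 2 // grow the wait between retries, as in the log
		}
		return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, network)
	}

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		// Network and MAC taken from the log lines above.
		ip, err := waitForIP(conn, "mk-force-systemd-env-926084", "52:54:00:90:ff:bf")
		if err != nil {
			panic(err)
		}
		fmt.Println("domain IP:", ip)
	}
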
	
	
	==> Docker <==
	Oct 25 10:12:59 old-k8s-version-019967 cri-dockerd[1538]: time="2025-10-25T10:12:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c0bc1aef1566377cad1ee615949d2c3c5e0c1576fd27cc433989791d9c497c25/resolv.conf as [nameserver 192.168.122.1]"
	Oct 25 10:12:59 old-k8s-version-019967 cri-dockerd[1538]: time="2025-10-25T10:12:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9ae6c8f24b1a2560589c12aefebb4984b1178a641b62fa1a6346d136af6dadcd/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 25 10:13:08 old-k8s-version-019967 cri-dockerd[1538]: time="2025-10-25T10:13:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c6c1e2036d7b7a52243fe3d45036e84b8d9a3dbd775ea99b4fb0737772a0ea1/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 25 10:13:08 old-k8s-version-019967 cri-dockerd[1538]: time="2025-10-25T10:13:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0b848e278900c635da64c81c6c219cdf9b416361a64f9c0fa086daa0bb6f4cf5/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 25 10:13:08 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:13:08.446884855Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 10:13:14 old-k8s-version-019967 cri-dockerd[1538]: time="2025-10-25T10:13:14Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 10:13:15 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:13:15.004039653Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Oct 25 10:13:15 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:13:15.125274988Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Oct 25 10:13:15 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:13:15.125679838Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Oct 25 10:13:15 old-k8s-version-019967 cri-dockerd[1538]: time="2025-10-25T10:13:15Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Oct 25 10:13:15 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:13:15.148215197Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 25 10:13:15 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:13:15.148255483Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 25 10:13:15 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:13:15.157686085Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Oct 25 10:13:15 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:13:15.157779017Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 25 10:13:26 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:13:26.188796243Z" level=info msg="ignoring event" container=c0d17429ba6bcab342d50a7549f6c959545788d98b43b999ff2ce28bf7d383ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 10:14:00 old-k8s-version-019967 cri-dockerd[1538]: time="2025-10-25T10:14:00Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Oct 25 10:14:01 old-k8s-version-019967 cri-dockerd[1538]: time="2025-10-25T10:14:01Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-sbkvf_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"0bd26c1cde42eebe969ee9605ca403bfb43a1ad267e7876dc24fc444ec79b044\""
	Oct 25 10:14:01 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:14:01.767265334Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 25 10:14:01 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:14:01.767314610Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 25 10:14:01 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:14:01.778744484Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Oct 25 10:14:01 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:14:01.779281431Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 25 10:14:01 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:14:01.907561063Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Oct 25 10:14:02 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:14:02.045560800Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Oct 25 10:14:02 old-k8s-version-019967 dockerd[1167]: time="2025-10-25T10:14:02.045709144Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Oct 25 10:14:02 old-k8s-version-019967 cri-dockerd[1538]: time="2025-10-25T10:14:02Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
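
	Editor's note: the echoserver:1.4 pull failures above are a deprecation, not a network fault: the image is still published as a Docker schema 1 manifest (application/vnd.docker.distribution.manifest.v1+prettyjws), whose support this Docker daemon has removed; the fake.domain errors are a separate, expected DNS failure. A toy classifier for the media types involved (the constants are the standard registry media types quoted in the log; the helper itself is hypothetical):

	package main

	import "fmt"

	// Manifest media types: schema 1 (rejected above) versus
	// schema 2 / OCI, which current daemons accept.
	const (
		schema1Signed = "application/vnd.docker.distribution.manifest.v1+prettyjws"
		schema2       = "application/vnd.docker.distribution.manifest.v2+json"
		ociManifest   = "application/vnd.oci.image.manifest.v1+json"
	)

	// pullable is a hypothetical helper: true only for formats a
	// modern Docker daemon will still pull.
	func pullable(mediaType string) bool {
		switch mediaType {
		case schema2, ociManifest:
			return true
		default:
			return false
		}
	}

	func main() {
		fmt.Println(pullable(schema1Signed)) // false: the echoserver:1.4 case above
		fmt.Println(pullable(ociManifest))   // true
	}
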
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e09dc34b53e85       6e38f40d628db                                                                                         3 seconds ago        Running             storage-provisioner       2                   99e1dbd93d8e8       storage-provisioner
	ec8d99f78aa4d       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        50 seconds ago       Running             kubernetes-dashboard      0                   5c6c1e2036d7b       kubernetes-dashboard-8694d4445c-h4pfc
	6052b75f2caa2       56cc512116c8f                                                                                         About a minute ago   Running             busybox                   1                   9ae6c8f24b1a2       busybox
	521d071359ab5       ead0a4a53df89                                                                                         About a minute ago   Running             coredns                   1                   c0bc1aef15663       coredns-5dd5756b68-xqchd
	4a5b72a8270c4       ea1030da44aa1                                                                                         About a minute ago   Running             kube-proxy                1                   cb934799800df       kube-proxy-z9lpj
	c0d17429ba6bc       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   99e1dbd93d8e8       storage-provisioner
	2a1ffa2af22ae       f6f496300a2ae                                                                                         About a minute ago   Running             kube-scheduler            1                   c9d492f0d9863       kube-scheduler-old-k8s-version-019967
	0a566ad2cf486       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      1                   fd896622ed567       etcd-old-k8s-version-019967
	8f78166a52383       bb5e0dde9054c                                                                                         About a minute ago   Running             kube-apiserver            1                   612a9c17f7da9       kube-apiserver-old-k8s-version-019967
	64be38358ca41       4be79c38a4bab                                                                                         About a minute ago   Running             kube-controller-manager   1                   fed99e2c7e9e6       kube-controller-manager-old-k8s-version-019967
	e33aeaed80168       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 minutes ago        Exited              busybox                   0                   0dcc90abc2a5d       busybox
	ef4d7f184a0ae       ead0a4a53df89                                                                                         2 minutes ago        Exited              coredns                   0                   de2d2358784d5       coredns-5dd5756b68-xqchd
	bbbdcbb03f352       ea1030da44aa1                                                                                         2 minutes ago        Exited              kube-proxy                0                   d42487b488045       kube-proxy-z9lpj
	8080679efccf1       73deb9a3f7025                                                                                         2 minutes ago        Exited              etcd                      0                   c5d0d4f648e19       etcd-old-k8s-version-019967
	0231e6e289ad6       bb5e0dde9054c                                                                                         2 minutes ago        Exited              kube-apiserver            0                   9a3e71ca1c5fc       kube-apiserver-old-k8s-version-019967
	07c9f990248e6       f6f496300a2ae                                                                                         2 minutes ago        Exited              kube-scheduler            0                   b9efadf0fdea9       kube-scheduler-old-k8s-version-019967
	17d10789bbf97       4be79c38a4bab                                                                                         2 minutes ago        Exited              kube-controller-manager   0                   4ee66852bf882       kube-controller-manager-old-k8s-version-019967
	
	
	==> coredns [521d071359ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34349 - 6753 "HINFO IN 5970406071330794890.8778929822718352350. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028025267s
	
	
	==> coredns [ef4d7f184a0a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-019967
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-019967
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=old-k8s-version-019967
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_11_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:11:28 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-019967
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:14:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:14:00 +0000   Sat, 25 Oct 2025 10:11:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:14:00 +0000   Sat, 25 Oct 2025 10:11:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:14:00 +0000   Sat, 25 Oct 2025 10:11:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 10:14:00 +0000   Sat, 25 Oct 2025 10:14:00 +0000   KubeletNotReady              container runtime status check may not have completed yet
	Addresses:
	  InternalIP:  192.168.39.226
	  Hostname:    old-k8s-version-019967
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 aae1fbb076584924b1620441fbc223c5
	  System UUID:                aae1fbb0-7658-4924-b162-0441fbc223c5
	  Boot ID:                    ce58d0ca-d932-49d6-abfe-58c464eee8a9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 coredns-5dd5756b68-xqchd                          100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m21s
	  kube-system                 etcd-old-k8s-version-019967                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m33s
	  kube-system                 kube-apiserver-old-k8s-version-019967             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-controller-manager-old-k8s-version-019967    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-proxy-z9lpj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-old-k8s-version-019967             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 metrics-server-57f55c9bc5-d9tm8                   100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         114s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-gj2s8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-h4pfc             0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 68s                kube-proxy       
	  Normal  Starting                 2m18s              kube-proxy       
	  Normal  Starting                 2m33s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m33s              kubelet          Node old-k8s-version-019967 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m33s              kubelet          Node old-k8s-version-019967 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m33s              kubelet          Node old-k8s-version-019967 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m33s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m30s              kubelet          Node old-k8s-version-019967 status is now: NodeReady
	  Normal  RegisteredNode           2m22s              node-controller  Node old-k8s-version-019967 event: Registered Node old-k8s-version-019967 in Controller
	  Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 75s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s (x8 over 75s)  kubelet          Node old-k8s-version-019967 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     74s (x7 over 75s)  kubelet          Node old-k8s-version-019967 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    74s (x8 over 75s)  kubelet          Node old-k8s-version-019967 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           58s                node-controller  Node old-k8s-version-019967 event: Registered Node old-k8s-version-019967 in Controller
	  Normal  Starting                 4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4s                 kubelet          Node old-k8s-version-019967 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s                 kubelet          Node old-k8s-version-019967 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s                 kubelet          Node old-k8s-version-019967 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             4s                 kubelet          Node old-k8s-version-019967 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  4s                 kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[Oct25 10:12] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001619] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000252] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.010453] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000003] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.103884] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.108995] kauditd_printk_skb: 449 callbacks suppressed
	[  +6.083028] kauditd_printk_skb: 174 callbacks suppressed
	[Oct25 10:13] kauditd_printk_skb: 312 callbacks suppressed
	[  +7.073315] kauditd_printk_skb: 75 callbacks suppressed
	[ +11.015199] kauditd_printk_skb: 17 callbacks suppressed
	[Oct25 10:14] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [0a566ad2cf48] <==
	{"level":"warn","ts":"2025-10-25T10:12:55.150626Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.068569ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9883037347427091583 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-019967\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-019967\" value_size:5313 >> failure:<>>","response":"size:5"}
	{"level":"info","ts":"2025-10-25T10:12:55.15105Z","caller":"traceutil/trace.go:171","msg":"trace[1380957270] transaction","detail":"{read_only:false; number_of_response:0; response_revision:462; }","duration":"377.206137ms","start":"2025-10-25T10:12:54.773828Z","end":"2025-10-25T10:12:55.151034Z","steps":["trace[1380957270] 'process raft request'  (duration: 192.070271ms)","trace[1380957270] 'compare'  (duration: 184.006376ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T10:12:55.151365Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:12:54.773813Z","time spent":"377.441311ms","remote":"127.0.0.1:42592","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-019967\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-019967\" value_size:5313 >> failure:<>"}
	{"level":"info","ts":"2025-10-25T10:12:55.15147Z","caller":"traceutil/trace.go:171","msg":"trace[194837759] linearizableReadLoop","detail":"{readStateIndex:485; appliedIndex:482; }","duration":"336.507185ms","start":"2025-10-25T10:12:54.814949Z","end":"2025-10-25T10:12:55.151456Z","steps":["trace[194837759] 'read index received'  (duration: 150.959684ms)","trace[194837759] 'applied index is now lower than readState.Index'  (duration: 185.546492ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T10:12:55.152814Z","caller":"traceutil/trace.go:171","msg":"trace[576346825] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"376.252478ms","start":"2025-10-25T10:12:54.776537Z","end":"2025-10-25T10:12:55.152789Z","steps":["trace[576346825] 'process raft request'  (duration: 374.21177ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:12:55.152988Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:12:54.776521Z","time spent":"376.363068ms","remote":"127.0.0.1:42522","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":740,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/old-k8s-version-019967.1871b44c7fbee1c8\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/old-k8s-version-019967.1871b44c7fbee1c8\" value_size:658 lease:659665310572315767 >> failure:<>"}
	{"level":"info","ts":"2025-10-25T10:12:55.153571Z","caller":"traceutil/trace.go:171","msg":"trace[1300241435] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"366.653139ms","start":"2025-10-25T10:12:54.786888Z","end":"2025-10-25T10:12:55.153541Z","steps":["trace[1300241435] 'process raft request'  (duration: 364.271768ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:12:55.153788Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:12:54.786873Z","time spent":"366.733883ms","remote":"127.0.0.1:42576","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4762,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/old-k8s-version-019967\" mod_revision:397 > success:<request_put:<key:\"/registry/minions/old-k8s-version-019967\" value_size:4714 >> failure:<request_range:<key:\"/registry/minions/old-k8s-version-019967\" > >"}
	{"level":"warn","ts":"2025-10-25T10:12:55.154014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"339.072942ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-xqchd\" ","response":"range_response_count:1 size:4587"}
	{"level":"info","ts":"2025-10-25T10:12:55.154063Z","caller":"traceutil/trace.go:171","msg":"trace[1623528309] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-xqchd; range_end:; response_count:1; response_revision:464; }","duration":"339.119406ms","start":"2025-10-25T10:12:54.814923Z","end":"2025-10-25T10:12:55.154043Z","steps":["trace[1623528309] 'agreement among raft nodes before linearized reading'  (duration: 338.964768ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:12:55.154097Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:12:54.814908Z","time spent":"339.179412ms","remote":"127.0.0.1:42592","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4610,"request content":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-xqchd\" "}
	{"level":"warn","ts":"2025-10-25T10:12:55.155293Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"283.133793ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2025-10-25T10:12:55.155351Z","caller":"traceutil/trace.go:171","msg":"trace[1332048] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:464; }","duration":"283.200474ms","start":"2025-10-25T10:12:54.872139Z","end":"2025-10-25T10:12:55.155339Z","steps":["trace[1332048] 'agreement among raft nodes before linearized reading'  (duration: 282.922855ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:12:55.158303Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.62736ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2025-10-25T10:12:55.160084Z","caller":"traceutil/trace.go:171","msg":"trace[1850943651] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:464; }","duration":"286.281156ms","start":"2025-10-25T10:12:54.87366Z","end":"2025-10-25T10:12:55.159941Z","steps":["trace[1850943651] 'agreement among raft nodes before linearized reading'  (duration: 284.557261ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:12:55.160806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"288.074929ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2025-10-25T10:12:55.160986Z","caller":"traceutil/trace.go:171","msg":"trace[744907045] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:464; }","duration":"288.251523ms","start":"2025-10-25T10:12:54.87272Z","end":"2025-10-25T10:12:55.160972Z","steps":["trace[744907045] 'agreement among raft nodes before linearized reading'  (duration: 287.939934ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:12:55.161489Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.205863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-10-25T10:12:55.16164Z","caller":"traceutil/trace.go:171","msg":"trace[1549787144] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:464; }","duration":"289.339543ms","start":"2025-10-25T10:12:54.872272Z","end":"2025-10-25T10:12:55.161612Z","steps":["trace[1549787144] 'agreement among raft nodes before linearized reading'  (duration: 289.101037ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:12:55.162591Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"290.349413ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:1 size:775"}
	{"level":"info","ts":"2025-10-25T10:12:55.162801Z","caller":"traceutil/trace.go:171","msg":"trace[685567950] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:1; response_revision:464; }","duration":"290.565332ms","start":"2025-10-25T10:12:54.872228Z","end":"2025-10-25T10:12:55.162793Z","steps":["trace[685567950] 'agreement among raft nodes before linearized reading'  (duration: 289.643542ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:13:26.519809Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"343.220694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.226\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2025-10-25T10:13:26.519895Z","caller":"traceutil/trace.go:171","msg":"trace[888286509] range","detail":"{range_begin:/registry/masterleases/192.168.39.226; range_end:; response_count:1; response_revision:646; }","duration":"343.319485ms","start":"2025-10-25T10:13:26.176561Z","end":"2025-10-25T10:13:26.519881Z","steps":["trace[888286509] 'range keys from in-memory index tree'  (duration: 343.098739ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:13:26.519931Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:13:26.176542Z","time spent":"343.380588ms","remote":"127.0.0.1:42474","response type":"/etcdserverpb.KV/Range","request count":0,"request size":39,"response count":1,"response size":158,"request content":"key:\"/registry/masterleases/192.168.39.226\" "}
	{"level":"info","ts":"2025-10-25T10:13:26.767179Z","caller":"traceutil/trace.go:171","msg":"trace[959740343] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"169.157892ms","start":"2025-10-25T10:13:26.597775Z","end":"2025-10-25T10:13:26.766933Z","steps":["trace[959740343] 'process raft request'  (duration: 126.584802ms)","trace[959740343] 'compare'  (duration: 41.861478ms)"],"step_count":2}
	
	
	==> etcd [8080679efccf] <==
	{"level":"info","ts":"2025-10-25T10:11:46.05137Z","caller":"traceutil/trace.go:171","msg":"trace[457405828] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"535.891643ms","start":"2025-10-25T10:11:45.515457Z","end":"2025-10-25T10:11:46.051349Z","steps":["trace[457405828] 'process raft request'  (duration: 535.072881ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:11:46.051564Z","caller":"traceutil/trace.go:171","msg":"trace[2004177702] linearizableReadLoop","detail":"{readStateIndex:358; appliedIndex:357; }","duration":"333.43996ms","start":"2025-10-25T10:11:45.717468Z","end":"2025-10-25T10:11:46.050908Z","steps":["trace[2004177702] 'read index received'  (duration: 333.07117ms)","trace[2004177702] 'applied index is now lower than readState.Index'  (duration: 368.161µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T10:11:46.051669Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"436.763804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T10:11:46.052036Z","caller":"traceutil/trace.go:171","msg":"trace[347910312] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:348; }","duration":"437.140884ms","start":"2025-10-25T10:11:45.614886Z","end":"2025-10-25T10:11:46.052027Z","steps":["trace[347910312] 'agreement among raft nodes before linearized reading'  (duration: 436.743208ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:11:46.05207Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:11:45.614873Z","time spent":"437.183014ms","remote":"127.0.0.1:40400","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":28,"request content":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" "}
	{"level":"warn","ts":"2025-10-25T10:11:46.053789Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:11:45.515441Z","time spent":"536.288264ms","remote":"127.0.0.1:40310","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":749,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/configmaps/kube-system/coredns\" mod_revision:228 > success:<request_put:<key:\"/registry/configmaps/kube-system/coredns\" value_size:701 >> failure:<request_range:<key:\"/registry/configmaps/kube-system/coredns\" > >"}
	{"level":"info","ts":"2025-10-25T10:11:46.056166Z","caller":"traceutil/trace.go:171","msg":"trace[252938826] transaction","detail":"{read_only:false; response_revision:349; number_of_response:1; }","duration":"327.295876ms","start":"2025-10-25T10:11:45.728856Z","end":"2025-10-25T10:11:46.056152Z","steps":["trace[252938826] 'process raft request'  (duration: 326.906563ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:11:46.056704Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:11:45.72884Z","time spent":"327.764278ms","remote":"127.0.0.1:40278","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":732,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-z9lpj.1871b43d8cbcc5d1\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-z9lpj.1871b43d8cbcc5d1\" value_size:652 lease:659665310550283856 >> failure:<>"}
	{"level":"info","ts":"2025-10-25T10:11:58.827765Z","caller":"traceutil/trace.go:171","msg":"trace[2113752601] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"126.141255ms","start":"2025-10-25T10:11:58.701604Z","end":"2025-10-25T10:11:58.827745Z","steps":["trace[2113752601] 'process raft request'  (duration: 126.03987ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:11:59.172436Z","caller":"traceutil/trace.go:171","msg":"trace[610270817] transaction","detail":"{read_only:false; response_revision:408; number_of_response:1; }","duration":"138.33607ms","start":"2025-10-25T10:11:59.034082Z","end":"2025-10-25T10:11:59.172418Z","steps":["trace[610270817] 'process raft request'  (duration: 138.019221ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:11:59.291857Z","caller":"traceutil/trace.go:171","msg":"trace[801642274] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"113.222445ms","start":"2025-10-25T10:11:59.178592Z","end":"2025-10-25T10:11:59.291814Z","steps":["trace[801642274] 'process raft request'  (duration: 106.674837ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:11:59.302691Z","caller":"traceutil/trace.go:171","msg":"trace[461076853] linearizableReadLoop","detail":"{readStateIndex:423; appliedIndex:421; }","duration":"109.50711ms","start":"2025-10-25T10:11:59.193169Z","end":"2025-10-25T10:11:59.302676Z","steps":["trace[461076853] 'read index received'  (duration: 92.106408ms)","trace[461076853] 'applied index is now lower than readState.Index'  (duration: 17.400045ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T10:11:59.30305Z","caller":"traceutil/trace.go:171","msg":"trace[700043840] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"120.811549ms","start":"2025-10-25T10:11:59.182152Z","end":"2025-10-25T10:11:59.302963Z","steps":["trace[700043840] 'process raft request'  (duration: 120.429122ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:11:59.303137Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.955488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:1 size:1312"}
	{"level":"info","ts":"2025-10-25T10:11:59.303181Z","caller":"traceutil/trace.go:171","msg":"trace[1456919852] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:1; response_revision:410; }","duration":"110.021954ms","start":"2025-10-25T10:11:59.193147Z","end":"2025-10-25T10:11:59.303169Z","steps":["trace[1456919852] 'agreement among raft nodes before linearized reading'  (duration: 109.924442ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:12:11.494579Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-25T10:12:11.494686Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"old-k8s-version-019967","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.226:2380"],"advertise-client-urls":["https://192.168.39.226:2379"]}
	{"level":"warn","ts":"2025-10-25T10:12:11.49478Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T10:12:11.494857Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T10:12:11.59691Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.226:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T10:12:11.598006Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.226:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-25T10:12:11.598369Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9e3e2863ac888927","current-leader-member-id":"9e3e2863ac888927"}
	{"level":"info","ts":"2025-10-25T10:12:11.606509Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.226:2380"}
	{"level":"info","ts":"2025-10-25T10:12:11.606587Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.226:2380"}
	{"level":"info","ts":"2025-10-25T10:12:11.606595Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"old-k8s-version-019967","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.226:2380"],"advertise-client-urls":["https://192.168.39.226:2379"]}
	
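The repeated "apply request took too long" warnings above show etcd requests blowing well past the 100ms expected duration, a common symptom of I/O contention on shared CI hosts rather than a cluster fault. As a spot check, a sketch assuming minikube's kubeadm-style layout (the pod name etcd-old-k8s-version-019967 and the /var/lib/minikube/certs/etcd certificate paths are minikube defaults, not taken from this log):

	# Probe etcd health and latency from inside the etcd static pod.
	kubectl --context old-k8s-version-019967 -n kube-system \
	  exec etcd-old-k8s-version-019967 -- etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint status -w table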
	
	==> kernel <==
	 10:14:04 up 1 min,  0 users,  load average: 0.46, 0.25, 0.09
	Linux old-k8s-version-019967 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0231e6e289ad] <==
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:12:12.576386       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:12:12.576398       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:12:12.576493       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [8f78166a5238] <==
	W1025 10:12:58.308027       1 aggregator.go:164] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1025 10:12:58.340324       1 aggregator.go:164] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1025 10:12:58.404716       1 aggregator.go:164] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1025 10:13:00.133809       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 10:13:00.297948       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.100.37"}
	I1025 10:13:00.329740       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.107.162"}
	I1025 10:13:06.892826       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:13:07.289294       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1025 10:13:26.768642       1 trace.go:236] Trace[1255543489]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.226,type:*v1.Endpoints,resource:apiServerIPInfo (25-Oct-2025 10:13:26.176) (total time: 592ms):
	Trace[1255543489]: ---"initial value restored" 346ms (10:13:26.523)
	Trace[1255543489]: ---"Transaction prepared" 74ms (10:13:26.597)
	Trace[1255543489]: ---"Txn call completed" 171ms (10:13:26.768)
	Trace[1255543489]: [592.565085ms] [592.565085ms] END
	W1025 10:13:59.741172       1 handler_proxy.go:93] no RequestInfo found in the context
	E1025 10:13:59.741296       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1025 10:13:59.741337       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1025 10:13:59.742259       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.110.221.68:443: connect: connection refused
	I1025 10:13:59.742813       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1025 10:13:59.744551       1 handler_proxy.go:93] no RequestInfo found in the context
	E1025 10:13:59.744603       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1025 10:13:59.744611       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
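The 503s for v1beta1.metrics.k8s.io above come from the aggregation layer: the APIService is registered, but its backing kube-system/metrics-server endpoint refuses connections because that pod never pulls its image (see the kubelet section below). A quick way to surface the failing condition, using only standard kubectl:

	kubectl --context old-k8s-version-019967 get apiservice v1beta1.metrics.k8s.io \
	  -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'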
	
	==> kube-controller-manager [17d10789bbf9] <==
	I1025 10:11:43.803769       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-sbkvf"
	I1025 10:11:43.851560       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xqchd"
	I1025 10:11:43.889765       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="492.272259ms"
	I1025 10:11:43.927222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="37.400907ms"
	I1025 10:11:43.930321       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="125.306µs"
	I1025 10:11:43.974895       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.442µs"
	I1025 10:11:46.141553       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1025 10:11:46.184526       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-sbkvf"
	I1025 10:11:46.206281       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.034157ms"
	I1025 10:11:46.224721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.268206ms"
	I1025 10:11:46.226268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="138.791µs"
	I1025 10:11:46.786836       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.197µs"
	I1025 10:11:46.826844       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="1.505934ms"
	I1025 10:11:46.899438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="39.353619ms"
	I1025 10:11:46.899651       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="172.188µs"
	I1025 10:11:55.665306       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.836µs"
	I1025 10:11:55.932357       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.723µs"
	I1025 10:11:55.947071       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="680.915µs"
	I1025 10:11:55.952409       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.374µs"
	I1025 10:12:10.319022       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57f55c9bc5 to 1"
	I1025 10:12:10.357102       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57f55c9bc5-d9tm8"
	I1025 10:12:10.383396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="68.644478ms"
	I1025 10:12:10.413810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="29.874769ms"
	I1025 10:12:10.416244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="189.829µs"
	I1025 10:12:10.450418       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="67.147µs"
	
	
	==> kube-controller-manager [64be38358ca4] <==
	I1025 10:13:07.081903       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 10:13:07.301468       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1025 10:13:07.307214       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1025 10:13:07.398019       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:13:07.398077       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 10:13:07.402188       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-gj2s8"
	I1025 10:13:07.410289       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-h4pfc"
	I1025 10:13:07.418267       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="117.758884ms"
	I1025 10:13:07.431374       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="126.237665ms"
	I1025 10:13:07.444772       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:13:07.451407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="33.056086ms"
	I1025 10:13:07.451512       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.562µs"
	I1025 10:13:07.495021       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.372µs"
	I1025 10:13:07.499523       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.990949ms"
	I1025 10:13:07.499784       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="182.808µs"
	I1025 10:13:07.500011       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="90.421µs"
	I1025 10:13:10.904504       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="62.085µs"
	I1025 10:13:15.386821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="16.160382ms"
	I1025 10:13:15.388071       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="752.073µs"
	I1025 10:13:15.392380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.747µs"
	E1025 10:13:59.883411       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1025 10:13:59.899905       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1025 10:14:01.641247       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="191.217µs"
	I1025 10:14:01.667985       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="148.184µs"
	I1025 10:14:04.840289       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	
	
	==> kube-proxy [4a5b72a8270c] <==
	I1025 10:12:56.301550       1 server_others.go:69] "Using iptables proxy"
	I1025 10:12:56.326673       1 node.go:141] Successfully retrieved node IP: 192.168.39.226
	I1025 10:12:56.440253       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1025 10:12:56.440292       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 10:12:56.451953       1 server_others.go:152] "Using iptables Proxier"
	I1025 10:12:56.452817       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 10:12:56.455248       1 server.go:846] "Version info" version="v1.28.0"
	I1025 10:12:56.455420       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:12:56.462282       1 config.go:188] "Starting service config controller"
	I1025 10:12:56.464983       1 config.go:315] "Starting node config controller"
	I1025 10:12:56.467532       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 10:12:56.462950       1 config.go:97] "Starting endpoint slice config controller"
	I1025 10:12:56.479017       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 10:12:56.479687       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 10:12:56.579881       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 10:12:56.582722       1 shared_informer.go:318] Caches are synced for service config
	I1025 10:12:56.583016       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [bbbdcbb03f35] <==
	I1025 10:11:45.443207       1 server_others.go:69] "Using iptables proxy"
	I1025 10:11:45.726300       1 node.go:141] Successfully retrieved node IP: 192.168.39.226
	I1025 10:11:45.765796       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1025 10:11:45.765833       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 10:11:45.768618       1 server_others.go:152] "Using iptables Proxier"
	I1025 10:11:45.768675       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 10:11:45.769138       1 server.go:846] "Version info" version="v1.28.0"
	I1025 10:11:45.769162       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:11:45.770150       1 config.go:188] "Starting service config controller"
	I1025 10:11:45.770192       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 10:11:45.770212       1 config.go:97] "Starting endpoint slice config controller"
	I1025 10:11:45.770369       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 10:11:45.771153       1 config.go:315] "Starting node config controller"
	I1025 10:11:45.771178       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 10:11:45.870995       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 10:11:45.871033       1 shared_informer.go:318] Caches are synced for service config
	I1025 10:11:45.871358       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [07c9f990248e] <==
	E1025 10:11:27.964091       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 10:11:28.778053       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 10:11:28.778095       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1025 10:11:28.824518       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 10:11:28.824644       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:11:28.974489       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 10:11:28.975263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 10:11:29.010414       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1025 10:11:29.010459       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1025 10:11:29.083295       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 10:11:29.083361       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1025 10:11:29.107144       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 10:11:29.107372       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1025 10:11:29.222847       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 10:11:29.222899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1025 10:11:29.226591       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 10:11:29.226641       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1025 10:11:29.243972       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 10:11:29.244478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1025 10:11:29.310506       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 10:11:29.310599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1025 10:11:29.317819       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 10:11:29.317869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1025 10:11:31.428795       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1025 10:12:11.460435       1 run.go:74] "command failed" err="finished without leader elect"
	
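The "forbidden" reflector errors above are the usual startup race: the scheduler begins listing resources before the apiserver has reconciled the bootstrap RBAC grants for system:kube-scheduler, and they stop once caches sync at 10:11:31. Had they persisted, impersonation would show whether the grants ever landed (a sketch; not part of the test run):

	kubectl --context old-k8s-version-019967 auth can-i list csinodes.storage.k8s.io \
	  --as=system:kube-scheduler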
	
	==> kube-scheduler [2a1ffa2af22a] <==
	I1025 10:12:52.558849       1 serving.go:348] Generated self-signed cert in-memory
	W1025 10:12:54.377511       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 10:12:54.378406       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:12:54.378493       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 10:12:54.378591       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 10:12:54.445402       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1025 10:12:54.447519       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:12:54.458920       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 10:12:54.458912       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:12:54.460782       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 10:12:54.465051       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 10:12:54.565600       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.295943    4173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c24811c7113ce61cfbc005b86b7ee179-ca-certs\") pod \"kube-controller-manager-old-k8s-version-019967\" (UID: \"c24811c7113ce61cfbc005b86b7ee179\") " pod="kube-system/kube-controller-manager-old-k8s-version-019967"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.296003    4173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c24811c7113ce61cfbc005b86b7ee179-k8s-certs\") pod \"kube-controller-manager-old-k8s-version-019967\" (UID: \"c24811c7113ce61cfbc005b86b7ee179\") " pod="kube-system/kube-controller-manager-old-k8s-version-019967"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.296082    4173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c24811c7113ce61cfbc005b86b7ee179-kubeconfig\") pod \"kube-controller-manager-old-k8s-version-019967\" (UID: \"c24811c7113ce61cfbc005b86b7ee179\") " pod="kube-system/kube-controller-manager-old-k8s-version-019967"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.296172    4173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/514ee6f89b5acd595fb32f3c1fe25f88-kubeconfig\") pod \"kube-scheduler-old-k8s-version-019967\" (UID: \"514ee6f89b5acd595fb32f3c1fe25f88\") " pod="kube-system/kube-scheduler-old-k8s-version-019967"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.423295    4173 apiserver.go:52] "Watching apiserver"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.429951    4173 topology_manager.go:215] "Topology Admit Handler" podUID="a80d6c71-5c2e-4318-843b-23d21bd67161" podNamespace="kube-system" podName="kube-proxy-z9lpj"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.430315    4173 topology_manager.go:215] "Topology Admit Handler" podUID="c83a23f1-f731-41c3-a7d4-b616238c6380" podNamespace="kube-system" podName="coredns-5dd5756b68-xqchd"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.430435    4173 topology_manager.go:215] "Topology Admit Handler" podUID="50f18c3a-8622-4521-9086-b343c1539058" podNamespace="kube-system" podName="storage-provisioner"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.431150    4173 topology_manager.go:215] "Topology Admit Handler" podUID="079a2647-b585-4cd6-9b2b-e23b90a5f34b" podNamespace="default" podName="busybox"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.431322    4173 topology_manager.go:215] "Topology Admit Handler" podUID="33627814-5083-4a1c-972e-4920295cb7f1" podNamespace="kube-system" podName="metrics-server-57f55c9bc5-d9tm8"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.431490    4173 topology_manager.go:215] "Topology Admit Handler" podUID="70e6bc5f-09a2-4dbb-939a-f54bcb67649e" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-gj2s8"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.431618    4173 topology_manager.go:215] "Topology Admit Handler" podUID="ce0ae58a-f2b9-4660-aa10-960f6e791450" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-h4pfc"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.454245    4173 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.499159    4173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/50f18c3a-8622-4521-9086-b343c1539058-tmp\") pod \"storage-provisioner\" (UID: \"50f18c3a-8622-4521-9086-b343c1539058\") " pod="kube-system/storage-provisioner"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.499778    4173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a80d6c71-5c2e-4318-843b-23d21bd67161-lib-modules\") pod \"kube-proxy-z9lpj\" (UID: \"a80d6c71-5c2e-4318-843b-23d21bd67161\") " pod="kube-system/kube-proxy-z9lpj"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.499960    4173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a80d6c71-5c2e-4318-843b-23d21bd67161-xtables-lock\") pod \"kube-proxy-z9lpj\" (UID: \"a80d6c71-5c2e-4318-843b-23d21bd67161\") " pod="kube-system/kube-proxy-z9lpj"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: I1025 10:14:01.734038    4173 scope.go:117] "RemoveContainer" containerID="c0d17429ba6bcab342d50a7549f6c959545788d98b43b999ff2ce28bf7d383ab"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: E1025 10:14:01.780948    4173 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: E1025 10:14:01.781038    4173 kuberuntime_image.go:53] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: E1025 10:14:01.781362    4173 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zf2nv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Prob
e{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePoli
cy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-d9tm8_kube-system(33627814-5083-4a1c-972e-4920295cb7f1): ErrImagePull: Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 25 10:14:01 old-k8s-version-019967 kubelet[4173]: E1025 10:14:01.781426    4173 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-d9tm8" podUID="33627814-5083-4a1c-972e-4920295cb7f1"
	Oct 25 10:14:02 old-k8s-version-019967 kubelet[4173]: E1025 10:14:02.061529    4173 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Oct 25 10:14:02 old-k8s-version-019967 kubelet[4173]: E1025 10:14:02.061574    4173 kuberuntime_image.go:53] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Oct 25 10:14:02 old-k8s-version-019967 kubelet[4173]: E1025 10:14:02.061711    4173 kuberuntime_manager.go:1209] container &Container{Name:dashboard-metrics-scraper,Image:registry.k8s.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-v9fjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,Termination
GracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dashboard-metrics-scraper-5f989dc9cf-gj2s8_kubernetes-dashboard(70e6bc5f-09a2-4dbb-939a-f54bcb67649e): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
	Oct 25 10:14:02 old-k8s-version-019967 kubelet[4173]: E1025 10:14:02.061758    4173 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj2s8" podUID="70e6bc5f-09a2-4dbb-939a-f54bcb67649e"
	
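Two distinct image-pull failures are interleaved above: metrics-server points at fake.domain/registry.k8s.io/echoserver:1.4, an intentionally unresolvable registry, while dashboard-metrics-scraper fails because current Docker daemons have removed Image Manifest v2 schema 1 support and registry.k8s.io/echoserver:1.4 is still published in the old format. The second failure is reproducible outside the test:

	# Fails on any recent Docker daemon with the same schema-1 message.
	docker pull registry.k8s.io/echoserver:1.4
	# Pod-level detail for both pulls:
	kubectl --context old-k8s-version-019967 get events -A \
	  --field-selector reason=Failed --sort-by=.lastTimestamp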
	
	==> kubernetes-dashboard [ec8d99f78aa4] <==
	2025/10/25 10:13:15 Starting overwatch
	2025/10/25 10:13:15 Using namespace: kubernetes-dashboard
	2025/10/25 10:13:15 Using in-cluster config to connect to apiserver
	2025/10/25 10:13:15 Using secret token for csrf signing
	2025/10/25 10:13:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:13:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:13:15 Successful initial request to the apiserver, version: v1.28.0
	2025/10/25 10:13:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:13:15 Generating JWE encryption key
	2025/10/25 10:13:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:13:15 Initializing JWE encryption key from synchronized object
	2025/10/25 10:13:15 Creating in-cluster Sidecar client
	2025/10/25 10:13:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:13:15 Serving insecurely on HTTP port: 9090
	2025/10/25 10:13:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [c0d17429ba6b] <==
	I1025 10:12:56.154042       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:13:26.165277       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
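The fatal i/o timeout above means the first storage-provisioner container could not reach the in-cluster apiserver VIP (10.96.0.1:443) within 30s of the node restart; its replacement (next section) comes up cleanly. The same path can be probed from a throwaway pod (a sketch; the probe pod name and curl image are illustrative, not from this run):

	kubectl --context old-k8s-version-019967 run apiserver-probe --rm -i \
	  --image=curlimages/curl --restart=Never -- \
	  curl -sk https://10.96.0.1:443/version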
	
	==> storage-provisioner [e09dc34b53e8] <==
	I1025 10:14:02.093687       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:14:02.140095       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:14:02.143773       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-019967 -n old-k8s-version-019967
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-019967 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-d9tm8 dashboard-metrics-scraper-5f989dc9cf-gj2s8
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-019967 describe pod metrics-server-57f55c9bc5-d9tm8 dashboard-metrics-scraper-5f989dc9cf-gj2s8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-019967 describe pod metrics-server-57f55c9bc5-d9tm8 dashboard-metrics-scraper-5f989dc9cf-gj2s8: exit status 1 (76.352355ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-d9tm8" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-5f989dc9cf-gj2s8" not found

** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-019967 describe pod metrics-server-57f55c9bc5-d9tm8 dashboard-metrics-scraper-5f989dc9cf-gj2s8: exit status 1
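The NotFound errors above are a race in the post-mortem itself: the two non-running pods listed at helpers_test.go:280 were deleted (and presumably recreated under new names) between the listing and the describe. Capturing the listing and the full objects in one call avoids the gap (illustrative, not part of helpers_test.go):

	kubectl --context old-k8s-version-019967 get pods -A \
	  --field-selector=status.phase!=Running -o yaml
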
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (39.63s)


Test pass (305/344)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 21.19
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.1/json-events 10.21
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.17
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.16
21 TestBinaryMirror 0.66
22 TestOffline 108.37
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 209.39
29 TestAddons/serial/Volcano 44.39
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 8.56
35 TestAddons/parallel/Registry 17.03
36 TestAddons/parallel/RegistryCreds 0.59
37 TestAddons/parallel/Ingress 21.86
38 TestAddons/parallel/InspektorGadget 6.19
39 TestAddons/parallel/MetricsServer 5.87
41 TestAddons/parallel/CSI 54.61
42 TestAddons/parallel/Headlamp 19.86
43 TestAddons/parallel/CloudSpanner 5.49
45 TestAddons/parallel/NvidiaDevicePlugin 6.51
46 TestAddons/parallel/Yakd 11.86
48 TestAddons/StoppedEnableDisable 13.39
49 TestCertOptions 65.18
50 TestCertExpiration 320.95
51 TestDockerFlags 84.61
52 TestForceSystemdFlag 67.05
53 TestForceSystemdEnv 66.83
58 TestErrorSpam/setup 42.74
59 TestErrorSpam/start 0.35
60 TestErrorSpam/status 0.66
61 TestErrorSpam/pause 1.28
62 TestErrorSpam/unpause 1.53
63 TestErrorSpam/stop 15.97
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 83.42
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 57.28
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.06
75 TestFunctional/serial/CacheCmd/cache/add_local 1.4
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.16
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 55.97
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 0.95
86 TestFunctional/serial/LogsFileCmd 0.98
87 TestFunctional/serial/InvalidService 4.32
89 TestFunctional/parallel/ConfigCmd 0.45
91 TestFunctional/parallel/DryRun 0.27
92 TestFunctional/parallel/InternationalLanguage 0.14
93 TestFunctional/parallel/StatusCmd 0.75
97 TestFunctional/parallel/ServiceCmdConnect 9.39
98 TestFunctional/parallel/AddonsCmd 0.15
101 TestFunctional/parallel/SSHCmd 0.31
102 TestFunctional/parallel/CpCmd 1.21
104 TestFunctional/parallel/FileSync 0.21
105 TestFunctional/parallel/CertSync 1.04
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.19
113 TestFunctional/parallel/License 0.35
114 TestFunctional/parallel/ServiceCmd/DeployApp 8.22
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 0.44
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.18
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.18
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.17
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.17
121 TestFunctional/parallel/ImageCommands/ImageBuild 4.02
122 TestFunctional/parallel/ImageCommands/Setup 1.75
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
124 TestFunctional/parallel/MountCmd/any-port 7.24
125 TestFunctional/parallel/ProfileCmd/profile_list 0.31
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.21
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.8
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.55
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.35
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.53
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.47
134 TestFunctional/parallel/ServiceCmd/List 0.42
135 TestFunctional/parallel/DockerEnv/bash 0.69
136 TestFunctional/parallel/MountCmd/specific-port 1.46
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.24
139 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
140 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
141 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
142 TestFunctional/parallel/ServiceCmd/Format 0.27
143 TestFunctional/parallel/ServiceCmd/URL 0.23
144 TestFunctional/parallel/MountCmd/VerifyCleanup 1.35
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
158 TestGvisorAddon 231.62
161 TestMultiControlPlane/serial/StartCluster 217.71
162 TestMultiControlPlane/serial/DeployApp 6.33
163 TestMultiControlPlane/serial/PingHostFromPods 1.39
164 TestMultiControlPlane/serial/AddWorkerNode 50.45
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.7
167 TestMultiControlPlane/serial/CopyFile 10.87
168 TestMultiControlPlane/serial/StopSecondaryNode 12.33
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.52
170 TestMultiControlPlane/serial/RestartSecondaryNode 25.53
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.81
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 165.62
173 TestMultiControlPlane/serial/DeleteSecondaryNode 7.09
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.49
175 TestMultiControlPlane/serial/StopCluster 42
176 TestMultiControlPlane/serial/RestartCluster 119.25
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.52
178 TestMultiControlPlane/serial/AddSecondaryNode 117.71
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.69
182 TestImageBuild/serial/Setup 40.06
183 TestImageBuild/serial/NormalBuild 2.01
184 TestImageBuild/serial/BuildWithBuildArg 0.9
185 TestImageBuild/serial/BuildWithDockerIgnore 0.78
186 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.93
190 TestJSONOutput/start/Command 80.46
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.6
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.57
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 13.69
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.25
218 TestMainNoArgs 0.06
219 TestMinikubeProfile 89.03
222 TestMountStart/serial/StartWithMountFirst 22.67
223 TestMountStart/serial/VerifyMountFirst 0.3
224 TestMountStart/serial/StartWithMountSecond 23.95
225 TestMountStart/serial/VerifyMountSecond 0.3
226 TestMountStart/serial/DeleteFirst 0.71
227 TestMountStart/serial/VerifyMountPostDelete 0.3
228 TestMountStart/serial/Stop 1.28
229 TestMountStart/serial/RestartStopped 20.18
230 TestMountStart/serial/VerifyMountPostStop 0.32
233 TestMultiNode/serial/FreshStart2Nodes 111.53
234 TestMultiNode/serial/DeployApp2Nodes 4.84
235 TestMultiNode/serial/PingHostFrom2Pods 0.92
236 TestMultiNode/serial/AddNode 46.6
237 TestMultiNode/serial/MultiNodeLabels 0.06
238 TestMultiNode/serial/ProfileList 0.46
239 TestMultiNode/serial/CopyFile 6.08
240 TestMultiNode/serial/StopNode 2.51
241 TestMultiNode/serial/StartAfterStop 44.78
242 TestMultiNode/serial/RestartKeepsNodes 167.9
243 TestMultiNode/serial/DeleteNode 2.15
244 TestMultiNode/serial/StopMultiNode 28.03
245 TestMultiNode/serial/RestartMultiNode 89.76
246 TestMultiNode/serial/ValidateNameConflict 46.27
251 TestPreload 150.56
253 TestScheduledStopUnix 113.59
254 TestSkaffold 125.12
257 TestRunningBinaryUpgrade 170.04
259 TestKubernetesUpgrade 197.35
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
271 TestStartStop/group/old-k8s-version/serial/FirstStart 69.75
272 TestNoKubernetes/serial/StartWithK8s 89.27
273 TestStartStop/group/old-k8s-version/serial/DeployApp 10.96
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.27
275 TestStartStop/group/old-k8s-version/serial/Stop 14.06
276 TestNoKubernetes/serial/StartWithStopK8s 17.61
277 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
278 TestStartStop/group/old-k8s-version/serial/SecondStart 46.79
279 TestNoKubernetes/serial/Start 34.59
291 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
292 TestNoKubernetes/serial/ProfileList 15.99
293 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 9.01
294 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
295 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
297 TestNoKubernetes/serial/Stop 1.58
298 TestNoKubernetes/serial/StartNoArgs 34.05
299 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
300 TestStoppedBinaryUpgrade/Setup 3.37
301 TestStoppedBinaryUpgrade/Upgrade 100.8
303 TestPause/serial/Start 100.74
304 TestStoppedBinaryUpgrade/MinikubeLogs 0.91
306 TestStartStop/group/no-preload/serial/FirstStart 96.93
307 TestPause/serial/SecondStartNoReconfiguration 69.05
309 TestStartStop/group/embed-certs/serial/FirstStart 88.53
310 TestPause/serial/Pause 0.61
311 TestPause/serial/VerifyStatus 0.24
312 TestPause/serial/Unpause 0.61
313 TestPause/serial/PauseAgain 0.77
314 TestPause/serial/DeletePaused 0.89
315 TestPause/serial/VerifyDeletedResources 15.25
316 TestStartStop/group/no-preload/serial/DeployApp 9.34
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.71
319 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.95
320 TestStartStop/group/no-preload/serial/Stop 14.15
321 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
322 TestStartStop/group/no-preload/serial/SecondStart 51.66
324 TestStartStop/group/newest-cni/serial/FirstStart 72
325 TestStartStop/group/embed-certs/serial/DeployApp 9.32
326 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.78
327 TestStartStop/group/embed-certs/serial/Stop 12.23
328 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
329 TestStartStop/group/embed-certs/serial/SecondStart 48.06
330 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7.01
331 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
332 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
333 TestStartStop/group/no-preload/serial/Pause 2.93
334 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 15.34
335 TestNetworkPlugins/group/auto/Start 59.71
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
338 TestStartStop/group/newest-cni/serial/Stop 14.34
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
340 TestStartStop/group/default-k8s-diff-port/serial/Stop 14.92
341 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.01
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.15
343 TestStartStop/group/newest-cni/serial/SecondStart 40.14
344 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.09
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
346 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 56.92
347 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
348 TestStartStop/group/embed-certs/serial/Pause 2.72
349 TestNetworkPlugins/group/kindnet/Start 101.25
350 TestNetworkPlugins/group/auto/KubeletFlags 0.2
351 TestNetworkPlugins/group/auto/NetCatPod 11.29
352 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
354 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
355 TestStartStop/group/newest-cni/serial/Pause 3.37
356 TestNetworkPlugins/group/flannel/Start 83.65
357 TestNetworkPlugins/group/auto/DNS 0.2
358 TestNetworkPlugins/group/auto/Localhost 0.15
359 TestNetworkPlugins/group/auto/HairPin 0.14
360 TestNetworkPlugins/group/enable-default-cni/Start 107.9
361 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.01
362 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
363 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
364 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.87
365 TestNetworkPlugins/group/bridge/Start 109.24
366 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
367 TestNetworkPlugins/group/kindnet/KubeletFlags 0.18
368 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
369 TestNetworkPlugins/group/flannel/ControllerPod 6.01
370 TestNetworkPlugins/group/kindnet/DNS 0.17
371 TestNetworkPlugins/group/kindnet/Localhost 0.16
372 TestNetworkPlugins/group/kindnet/HairPin 0.14
373 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
374 TestNetworkPlugins/group/flannel/NetCatPod 14.28
375 TestNetworkPlugins/group/kubenet/Start 94.33
376 TestNetworkPlugins/group/flannel/DNS 0.19
377 TestNetworkPlugins/group/flannel/Localhost 0.16
378 TestNetworkPlugins/group/flannel/HairPin 0.18
379 TestNetworkPlugins/group/custom-flannel/Start 73.82
380 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
381 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.26
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
383 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
385 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
386 TestNetworkPlugins/group/bridge/NetCatPod 22.28
387 TestNetworkPlugins/group/calico/Start 95.69
388 TestNetworkPlugins/group/bridge/DNS 0.2
389 TestNetworkPlugins/group/bridge/Localhost 0.18
390 TestNetworkPlugins/group/bridge/HairPin 0.15
391 TestNetworkPlugins/group/false/Start 94.25
392 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
393 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.27
394 TestNetworkPlugins/group/kubenet/KubeletFlags 0.23
395 TestNetworkPlugins/group/kubenet/NetCatPod 11.33
396 TestNetworkPlugins/group/custom-flannel/DNS 0.17
397 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
398 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
399 TestNetworkPlugins/group/kubenet/DNS 0.18
400 TestNetworkPlugins/group/kubenet/Localhost 0.17
401 TestNetworkPlugins/group/kubenet/HairPin 0.23
402 TestNetworkPlugins/group/calico/ControllerPod 6.01
403 TestNetworkPlugins/group/calico/KubeletFlags 0.18
404 TestNetworkPlugins/group/calico/NetCatPod 11.29
405 TestNetworkPlugins/group/calico/DNS 0.16
406 TestNetworkPlugins/group/calico/Localhost 0.13
407 TestNetworkPlugins/group/calico/HairPin 0.14
408 TestNetworkPlugins/group/false/KubeletFlags 0.19
409 TestNetworkPlugins/group/false/NetCatPod 10.25
410 TestNetworkPlugins/group/false/DNS 0.15
411 TestNetworkPlugins/group/false/Localhost 0.12
412 TestNetworkPlugins/group/false/HairPin 0.12

TestDownloadOnly/v1.28.0/json-events (21.19s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-295399 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-295399 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 : (21.187739686s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (21.19s)
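
With -o=json, start emits one CloudEvents-style JSON object per line on stdout. A sketch of pulling the step sequence out of a run like the one above; the io.k8s.sigs.minikube.step event type and data.name field are assumptions about the stream's schema, and the profile name is illustrative:

# Filter per-step events out of the line-delimited JSON stream
out/minikube-linux-amd64 start -o=json --download-only -p json-demo \
  --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'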

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1025 09:11:55.103583  371331 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1025 09:11:55.103710  371331 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-367343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
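
The pass above only stats the cached tarball; the same check works by hand. A sketch assuming the default cache layout (this run uses a custom MINIKUBE_HOME, so substitute the path from the log):

# List cached preload tarballs; expect the v1.28.0 docker/overlay2 file seen in the log
ls -lh ~/.minikube/cache/preloaded-tarball/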

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-295399
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-295399: exit status 85 (74.139406ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                      ARGS                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-295399 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 │ download-only-295399 │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:11:33
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:11:33.974131  371344 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:11:33.974452  371344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:11:33.974463  371344 out.go:374] Setting ErrFile to fd 2...
	I1025 09:11:33.974470  371344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:11:33.974690  371344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
	W1025 09:11:33.974828  371344 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21767-367343/.minikube/config/config.json: open /home/jenkins/minikube-integration/21767-367343/.minikube/config/config.json: no such file or directory
	I1025 09:11:33.975364  371344 out.go:368] Setting JSON to true
	I1025 09:11:33.976380  371344 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3236,"bootTime":1761380258,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:11:33.976475  371344 start.go:141] virtualization: kvm guest
	I1025 09:11:33.978816  371344 out.go:99] [download-only-295399] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:11:33.978977  371344 notify.go:220] Checking for updates...
	W1025 09:11:33.978981  371344 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21767-367343/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 09:11:33.980251  371344 out.go:171] MINIKUBE_LOCATION=21767
	I1025 09:11:33.981556  371344 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:11:33.982829  371344 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21767-367343/kubeconfig
	I1025 09:11:33.984227  371344 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-367343/.minikube
	I1025 09:11:33.988781  371344 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 09:11:33.990980  371344 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 09:11:33.991315  371344 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:11:34.022873  371344 out.go:99] Using the kvm2 driver based on user configuration
	I1025 09:11:34.022911  371344 start.go:305] selected driver: kvm2
	I1025 09:11:34.022918  371344 start.go:925] validating driver "kvm2" against <nil>
	I1025 09:11:34.023275  371344 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:11:34.023793  371344 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1025 09:11:34.023941  371344 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 09:11:34.023971  371344 cni.go:84] Creating CNI manager for ""
	I1025 09:11:34.024029  371344 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 09:11:34.024038  371344 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 09:11:34.024092  371344 start.go:349] cluster config:
	{Name:download-only-295399 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-295399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:11:34.024286  371344 iso.go:125] acquiring lock: {Name:mkaf34b0e79311c874a9b61067611bc0cdebbfac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:11:34.025688  371344 out.go:99] Downloading VM boot image ...
	I1025 09:11:34.025719  371344 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21767-367343/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1025 09:11:44.242588  371344 out.go:99] Starting "download-only-295399" primary control-plane node in "download-only-295399" cluster
	I1025 09:11:44.242622  371344 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1025 09:11:44.342448  371344 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1025 09:11:44.342492  371344 cache.go:58] Caching tarball of preloaded images
	I1025 09:11:44.342690  371344 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1025 09:11:44.344791  371344 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1025 09:11:44.344822  371344 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1025 09:11:44.443976  371344 preload.go:290] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1025 09:11:44.444133  371344 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/21767-367343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-295399 host does not exist
	  To start a cluster, run: "minikube start -p download-only-295399"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-295399
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnly/v1.34.1/json-events (10.21s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-639122 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-639122 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=kvm2 : (10.206791115s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (10.21s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1025 09:12:05.705894  371331 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1025 09:12:05.706015  371331 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-367343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-639122
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-639122: exit status 85 (78.658719ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-295399 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 │ download-only-295399 │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │                     │
	│ delete  │ --all                                                                                                                                           │ minikube             │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ delete  │ -p download-only-295399                                                                                                                         │ download-only-295399 │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ start   │ -o=json --download-only -p download-only-639122 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=kvm2 │ download-only-639122 │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:11:55
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:11:55.552207  371586 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:11:55.552321  371586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:11:55.552327  371586 out.go:374] Setting ErrFile to fd 2...
	I1025 09:11:55.552331  371586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:11:55.552500  371586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
	I1025 09:11:55.552968  371586 out.go:368] Setting JSON to true
	I1025 09:11:55.553850  371586 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3258,"bootTime":1761380258,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:11:55.553912  371586 start.go:141] virtualization: kvm guest
	I1025 09:11:55.555690  371586 out.go:99] [download-only-639122] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:11:55.555825  371586 notify.go:220] Checking for updates...
	I1025 09:11:55.556898  371586 out.go:171] MINIKUBE_LOCATION=21767
	I1025 09:11:55.558033  371586 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:11:55.559149  371586 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21767-367343/kubeconfig
	I1025 09:11:55.560310  371586 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-367343/.minikube
	I1025 09:11:55.561495  371586 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 09:11:55.563780  371586 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 09:11:55.564075  371586 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:11:55.593854  371586 out.go:99] Using the kvm2 driver based on user configuration
	I1025 09:11:55.593895  371586 start.go:305] selected driver: kvm2
	I1025 09:11:55.593910  371586 start.go:925] validating driver "kvm2" against <nil>
	I1025 09:11:55.594266  371586 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:11:55.594767  371586 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1025 09:11:55.594923  371586 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 09:11:55.594957  371586 cni.go:84] Creating CNI manager for ""
	I1025 09:11:55.595025  371586 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 09:11:55.595035  371586 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 09:11:55.595086  371586 start.go:349] cluster config:
	{Name:download-only-639122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-639122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:11:55.595200  371586 iso.go:125] acquiring lock: {Name:mkaf34b0e79311c874a9b61067611bc0cdebbfac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:11:55.596700  371586 out.go:99] Starting "download-only-639122" primary control-plane node in "download-only-639122" cluster
	I1025 09:11:55.596715  371586 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1025 09:11:56.054641  371586 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1025 09:11:56.054685  371586 cache.go:58] Caching tarball of preloaded images
	I1025 09:11:56.054895  371586 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1025 09:11:56.056782  371586 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1025 09:11:56.056821  371586 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1025 09:11:56.525492  371586 preload.go:290] Got checksum from GCS API "d7f0ccd752ff15c628c6fc8ef8c8033e"
	I1025 09:11:56.525545  371586 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4?checksum=md5:d7f0ccd752ff15c628c6fc8ef8c8033e -> /home/jenkins/minikube-integration/21767-367343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1025 09:12:04.751228  371586 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1025 09:12:04.751818  371586 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/download-only-639122/config.json ...
	I1025 09:12:04.751871  371586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/download-only-639122/config.json: {Name:mk5266a32f9b8ae65683963e0d02783fad61f594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:12:04.752098  371586 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1025 09:12:04.752317  371586 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21767-367343/.minikube/cache/linux/amd64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-639122 host does not exist
	  To start a cluster, run: "minikube start -p download-only-639122"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

TestDownloadOnly/v1.34.1/DeleteAll (0.17s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.17s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-639122
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.66s)

=== RUN   TestBinaryMirror
I1025 09:12:06.423886  371331 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-136948 --alsologtostderr --binary-mirror http://127.0.0.1:37505 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-136948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-136948
--- PASS: TestBinaryMirror (0.66s)
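
--binary-mirror redirects minikube's kubectl/kubelet/kubeadm downloads to an alternate HTTP endpoint; the test spins up its own mirror on 127.0.0.1:37505. A rough local equivalent is sketched below, with the caveat (an assumption) that the served directory must mimic dl.k8s.io's release path layout for the URLs to resolve:

# Serve a directory over HTTP and point binary downloads at it (port from this run, layout hypothetical)
mkdir -p /tmp/k8s-mirror && cd /tmp/k8s-mirror
python3 -m http.server 37505 &
out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
  --binary-mirror http://127.0.0.1:37505 --driver=kvm2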

TestOffline (108.37s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-563584 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-563584 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 : (1m47.38946184s)
helpers_test.go:175: Cleaning up "offline-docker-563584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-563584
--- PASS: TestOffline (108.37s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-442185
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-442185: exit status 85 (70.15147ms)

-- stdout --
	* Profile "addons-442185" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-442185"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-442185
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-442185: exit status 85 (70.147773ms)

-- stdout --
	* Profile "addons-442185" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-442185"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (209.39s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-442185 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-442185 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m29.394026221s)
--- PASS: TestAddons/Setup (209.39s)
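
Enabling fifteen addons in one start invocation is a convenience for the test; the same set can be toggled per-addon on the running profile:

# Per-addon management on the existing profile (subset shown)
out/minikube-linux-amd64 addons enable metrics-server -p addons-442185
out/minikube-linux-amd64 addons list -p addons-442185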

TestAddons/serial/Volcano (44.39s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 23.078299ms
addons_test.go:868: volcano-scheduler stabilized in 23.133802ms
addons_test.go:876: volcano-admission stabilized in 23.171174ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-c5g98" [95bee6e7-6994-4673-96cc-e2a48d3d5074] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003873573s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-mtff9" [609baad5-f78e-4230-b377-c3c851c5ddf2] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.006067257s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-k62ql" [92cf0704-c9f8-4b8b-b6f3-135689419b7f] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0040375s
addons_test.go:903: (dbg) Run:  kubectl --context addons-442185 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-442185 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-442185 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [3405be58-76cd-4ceb-8752-0fd58d96fbc9] Pending
helpers_test.go:352: "test-job-nginx-0" [3405be58-76cd-4ceb-8752-0fd58d96fbc9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [3405be58-76cd-4ceb-8752-0fd58d96fbc9] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.00710559s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-442185 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-442185 addons disable volcano --alsologtostderr -v=1: (11.926078357s)
--- PASS: TestAddons/serial/Volcano (44.39s)
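
The fixture testdata/vcjob.yaml is not reproduced in the log. A minimal Volcano job of the same shape might look like the sketch below; the manifest is an assumption modeled on the Volcano batch API, not the test's exact fixture (the real YAML may also create the namespace itself):

# Hypothetical stand-in for testdata/vcjob.yaml
kubectl --context addons-442185 create namespace my-volcano
kubectl --context addons-442185 apply -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  minAvailable: 1
  schedulerName: volcano
  tasks:
    - replicas: 1
      name: nginx
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nginx
              image: nginx
EOF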

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-442185 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-442185 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)
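
What this verifies: once gcp-auth is enabled, its credentials secret is copied into namespaces created afterwards. The check is reproducible by hand (namespace name illustrative):

# A freshly created namespace should receive a copy of the gcp-auth secret
kubectl --context addons-442185 create ns gcp-auth-demo
kubectl --context addons-442185 get secret gcp-auth -n gcp-auth-demo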

TestAddons/serial/GCPAuth/FakeCredentials (8.56s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-442185 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-442185 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [be415ed3-d298-46d3-8a28-0048952f6dda] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [be415ed3-d298-46d3-8a28-0048952f6dda] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.005062191s
addons_test.go:694: (dbg) Run:  kubectl --context addons-442185 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-442185 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-442185 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.56s)

TestAddons/parallel/Registry (17.03s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 16.716304ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-jmdnx" [113bb8bd-ad11-4695-97d8-f5f7fca0a88f] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004873968s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-c2qpq" [ad98b74f-93c3-4aec-9f8d-d9bb38aa1400] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004840072s
addons_test.go:392: (dbg) Run:  kubectl --context addons-442185 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-442185 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-442185 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.331802254s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-442185 ip
2025/10/25 09:16:54 [DEBUG] GET http://192.168.39.30:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-442185 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.03s)
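
The registry check reduces to an HTTP probe against the cluster-internal service DNS name; the same one-off probe can be run ad hoc (pod name illustrative):

# Spider the registry service from inside the cluster, as the test does
kubectl --context addons-442185 run --rm registry-probe --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"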

TestAddons/parallel/RegistryCreds (0.59s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 7.136473ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-442185
addons_test.go:332: (dbg) Run:  kubectl --context addons-442185 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-442185 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.59s)

TestAddons/parallel/Ingress (21.86s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-442185 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-442185 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-442185 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [2d470f07-5675-41bf-8bcb-a9a79d3f2b64] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [2d470f07-5675-41bf-8bcb-a9a79d3f2b64] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.006012383s
I1025 09:16:56.880942  371331 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-442185 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-442185 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-442185 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.30
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-442185 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-442185 addons disable ingress-dns --alsologtostderr -v=1: (1.393294533s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-442185 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-442185 addons disable ingress --alsologtostderr -v=1: (7.739759459s)
--- PASS: TestAddons/parallel/Ingress (21.86s)
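
Outside the harness, the same end-to-end ingress check is two commands: resolve the VM IP, then curl it with the Host header the ingress rule routes on (names from this run):

# Hit the nginx ingress from the host via the VM IP
MINIKUBE_IP="$(out/minikube-linux-amd64 -p addons-442185 ip)"
curl -s -H 'Host: nginx.example.com' "http://${MINIKUBE_IP}/" | head -n 5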

TestAddons/parallel/InspektorGadget (6.19s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-9xh8v" [bb09a68d-d9af-4c89-ac13-9252a8908a7b] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005454268s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-442185 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.19s)

TestAddons/parallel/MetricsServer (5.87s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 16.793645ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-bfhsw" [fed32d2b-9d1b-420c-97bb-ab8a81af5ab0] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003973527s
addons_test.go:463: (dbg) Run:  kubectl --context addons-442185 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-442185 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.87s)
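
Once the metrics-server pod reports healthy, usage data is queryable through the metrics API; kubectl top is the quickest smoke test:

# Resource usage for pods and nodes, backed by metrics-server
kubectl --context addons-442185 top pods -n kube-system
kubectl --context addons-442185 top nodes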

TestAddons/parallel/CSI (54.61s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1025 09:16:56.205127  371331 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1025 09:16:56.211722  371331 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1025 09:16:56.211756  371331 kapi.go:107] duration metric: took 6.653954ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.667912ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-442185 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-442185 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [14a25617-8b0d-4d65-8ff1-c9e29dcd13d6] Pending
helpers_test.go:352: "task-pv-pod" [14a25617-8b0d-4d65-8ff1-c9e29dcd13d6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [14a25617-8b0d-4d65-8ff1-c9e29dcd13d6] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004268167s
addons_test.go:572: (dbg) Run:  kubectl --context addons-442185 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-442185 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-442185 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-442185 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-442185 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-442185 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-442185 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [58e218fc-da6b-4215-99d7-a3a9dd5c5ce7] Pending
helpers_test.go:352: "task-pv-pod-restore" [58e218fc-da6b-4215-99d7-a3a9dd5c5ce7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [58e218fc-da6b-4215-99d7-a3a9dd5c5ce7] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005466022s
addons_test.go:614: (dbg) Run:  kubectl --context addons-442185 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-442185 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-442185 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-442185 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-442185 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-442185 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.800250032s)
--- PASS: TestAddons/parallel/CSI (54.61s)
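
For reference, the snapshot/restore round trip above reduces to two manifests: a VolumeSnapshot taken from the "hpvc" claim, and a new claim that names that snapshot as its dataSource. A minimal sketch of what testdata/csi-hostpath-driver/snapshot.yaml and pvc-restore.yaml plausibly contain (the files themselves are not reproduced in this log; the snapshot class name and size are assumptions):

kubectl --context addons-442185 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc                 # snapshot the bound claim
EOF
kubectl --context addons-442185 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi                                  # assumed size
  dataSource:                                       # restore from the snapshot
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF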

TestAddons/parallel/Headlamp (19.86s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-442185 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-hksm6" [67f931fa-ba16-4567-ba20-e6096d5f5a18] Pending
helpers_test.go:352: "headlamp-6945c6f4d-hksm6" [67f931fa-ba16-4567-ba20-e6096d5f5a18] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-hksm6" [67f931fa-ba16-4567-ba20-e6096d5f5a18] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.005118507s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-442185 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-442185 addons disable headlamp --alsologtostderr -v=1: (6.07698505s)
--- PASS: TestAddons/parallel/Headlamp (19.86s)

TestAddons/parallel/CloudSpanner (5.49s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-6knln" [5d89202e-25af-4928-8874-6684546fad2b] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00429226s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-442185 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.49s)

TestAddons/parallel/NvidiaDevicePlugin (6.51s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-t9l94" [f10e3f67-7921-4e3b-ab1b-0b86e4475c8d] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005350762s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-442185 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

TestAddons/parallel/Yakd (11.86s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-49jrm" [8266da77-3c6a-4e08-a184-5dfcda57d1cf] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.009847843s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-442185 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-442185 addons disable yakd --alsologtostderr -v=1: (5.848347748s)
--- PASS: TestAddons/parallel/Yakd (11.86s)

TestAddons/StoppedEnableDisable (13.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-442185
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-442185: (13.169832691s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-442185
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-442185
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-442185
--- PASS: TestAddons/StoppedEnableDisable (13.39s)

TestCertOptions (65.18s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-221974 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-221974 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m3.832260868s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-221974 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-221974 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-221974 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-221974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-221974
--- PASS: TestCertOptions (65.18s)
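
The assertions behind this pass: the extra --apiserver-ips/--apiserver-names values must land in the serving certificate's SANs, and the custom port must land in the kubeconfig. A quick manual spot-check along the same lines (profile name reused from the run above):

# Extra IPs and hostnames should appear as Subject Alternative Names.
minikube -p cert-options-221974 ssh -- \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 'Subject Alternative Name'
# The server URL in the admin kubeconfig should use port 8555.
minikube -p cert-options-221974 ssh -- "sudo grep server: /etc/kubernetes/admin.conf"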

TestCertExpiration (320.95s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-318946 --memory=3072 --cert-expiration=3m --driver=kvm2 
E1025 10:14:50.964685  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-318946 --memory=3072 --cert-expiration=3m --driver=kvm2 : (1m9.903578096s)
E1025 10:15:57.166333  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-318946 --memory=3072 --cert-expiration=8760h --driver=kvm2 
E1025 10:19:15.367022  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:19:15.373514  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:19:15.385144  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:19:15.406686  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:19:15.448181  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:19:15.530158  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:19:15.691781  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:19:16.013421  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:19:16.655520  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:19:17.936920  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:19:20.498801  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-318946 --memory=3072 --cert-expiration=8760h --driver=kvm2 : (1m9.897305964s)
helpers_test.go:175: Cleaning up "cert-expiration-318946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-318946
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-318946: (1.148900146s)
--- PASS: TestCertExpiration (320.95s)
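
The test starts the profile with a 3-minute certificate lifetime, lets it lapse, then restarts with --cert-expiration=8760h to confirm the certificates are reissued. A hedged sketch of verifying the rotation by hand:

minikube start -p cert-expiration-318946 --memory=3072 --cert-expiration=3m --driver=kvm2
# ...after the 3m window, restart with a one-year lifetime to force reissue:
minikube start -p cert-expiration-318946 --memory=3072 --cert-expiration=8760h --driver=kvm2
# The notAfter date should now be roughly a year out.
minikube -p cert-expiration-318946 ssh -- \
  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"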

TestDockerFlags (84.61s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-672844 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-672844 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m23.136537456s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-672844 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-672844 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-672844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-672844
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-672844: (1.05966828s)
--- PASS: TestDockerFlags (84.61s)
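
What this pass establishes: each --docker-env becomes an Environment= entry in Docker's systemd unit, and each --docker-opt becomes a dockerd command-line flag. The same two systemctl queries the test runs can be used interactively:

minikube start -p docker-flags-672844 --driver=kvm2 \
  --docker-env=FOO=BAR --docker-env=BAZ=BAT \
  --docker-opt=debug --docker-opt=icc=true
# FOO=BAR and BAZ=BAT should appear in the unit environment...
minikube -p docker-flags-672844 ssh -- "sudo systemctl show docker --property=Environment --no-pager"
# ...and --debug / --icc=true on the dockerd ExecStart line.
minikube -p docker-flags-672844 ssh -- "sudo systemctl show docker --property=ExecStart --no-pager"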

TestForceSystemdFlag (67.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-929224 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-929224 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m5.73587376s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-929224 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-929224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-929224
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-929224: (1.016561131s)
--- PASS: TestForceSystemdFlag (67.05s)
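
This test and TestForceSystemdEnv below verify the same property through two entry points: with systemd forced, Docker must report systemd rather than cgroupfs as its cgroup driver. A minimal sketch, assuming the MINIKUBE_FORCE_SYSTEMD variable (which also appears in the profile dumps later in this report) is what the env-based test toggles:

# Flag form:
minikube start -p force-systemd-flag-929224 --memory=3072 --force-systemd --driver=kvm2
# Env form (assumption):
#   MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env-926084 --memory=3072 --driver=kvm2
# Either way, expect "systemd":
minikube -p force-systemd-flag-929224 ssh -- "docker info --format {{.CgroupDriver}}"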

TestForceSystemdEnv (66.83s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-926084 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-926084 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (1m5.66575032s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-926084 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-926084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-926084
--- PASS: TestForceSystemdEnv (66.83s)

TestErrorSpam/setup (42.74s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-784877 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-784877 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-784877 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-784877 --driver=kvm2 : (42.734856375s)
--- PASS: TestErrorSpam/setup (42.74s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784877 --log_dir /tmp/nospam-784877 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784877 --log_dir /tmp/nospam-784877 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784877 --log_dir /tmp/nospam-784877 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.66s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784877 --log_dir /tmp/nospam-784877 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784877 --log_dir /tmp/nospam-784877 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784877 --log_dir /tmp/nospam-784877 status
--- PASS: TestErrorSpam/status (0.66s)

TestErrorSpam/pause (1.28s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784877 --log_dir /tmp/nospam-784877 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784877 --log_dir /tmp/nospam-784877 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784877 --log_dir /tmp/nospam-784877 pause
--- PASS: TestErrorSpam/pause (1.28s)

TestErrorSpam/unpause (1.53s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784877 --log_dir /tmp/nospam-784877 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784877 --log_dir /tmp/nospam-784877 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784877 --log_dir /tmp/nospam-784877 unpause
--- PASS: TestErrorSpam/unpause (1.53s)

TestErrorSpam/stop (15.97s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784877 --log_dir /tmp/nospam-784877 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-784877 --log_dir /tmp/nospam-784877 stop: (12.747712724s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784877 --log_dir /tmp/nospam-784877 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-784877 --log_dir /tmp/nospam-784877 stop: (1.240883176s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784877 --log_dir /tmp/nospam-784877 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-784877 --log_dir /tmp/nospam-784877 stop: (1.985617499s)
--- PASS: TestErrorSpam/stop (15.97s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21767-367343/.minikube/files/etc/test/nested/copy/371331/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
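
The sync path logged above is minikube's file-sync mechanism at work: anything placed under the files/ tree of the minikube home directory is copied into the guest at the same relative path on start. A minimal sketch (default ~/.minikube home assumed; the CI run uses MINIKUBE_HOME instead):

mkdir -p ~/.minikube/files/etc/test/nested/copy/371331
echo "test payload" > ~/.minikube/files/etc/test/nested/copy/371331/hosts
minikube start -p functional-447073
# The file should now exist inside the VM at the mirrored path:
minikube -p functional-447073 ssh -- cat /etc/test/nested/copy/371331/hosts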

TestFunctional/serial/StartWithProxy (83.42s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-447073 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-447073 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m23.423870048s)
--- PASS: TestFunctional/serial/StartWithProxy (83.42s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (57.28s)

=== RUN   TestFunctional/serial/SoftStart
I1025 09:24:41.708648  371331 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-447073 --alsologtostderr -v=8
E1025 09:25:36.551210  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:25:36.557684  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:25:36.569110  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:25:36.590508  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:25:36.632659  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:25:36.715391  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:25:36.877049  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:25:37.198623  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:25:37.840420  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-447073 --alsologtostderr -v=8: (57.281765519s)
functional_test.go:678: soft start took 57.282590207s for "functional-447073" cluster.
I1025 09:25:38.990752  371331 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (57.28s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-447073 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 cache add registry.k8s.io/pause:3.1
E1025 09:25:39.122265  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-447073 cache add registry.k8s.io/pause:3.1: (1.412871996s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 cache add registry.k8s.io/pause:latest
E1025 09:25:41.684573  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.06s)
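
The cache subcommands exercised in this group follow one pattern: "cache add" pulls an image into minikube's local cache and loads it into the node's runtime, "cache list"/"cache delete" manage the cache, and the in-node state can be confirmed with crictl, as the verify_cache_inside_node test below does. A condensed sketch:

minikube -p functional-447073 cache add registry.k8s.io/pause:3.1
minikube cache list                                       # the cache is tracked globally, not per profile
minikube -p functional-447073 ssh -- sudo crictl images   # confirm the image landed on the node
minikube cache delete registry.k8s.io/pause:3.1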

TestFunctional/serial/CacheCmd/cache/add_local (1.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-447073 /tmp/TestFunctionalserialCacheCmdcacheadd_local1537511386/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 cache add minikube-local-cache-test:functional-447073
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-447073 cache add minikube-local-cache-test:functional-447073: (1.048611439s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 cache delete minikube-local-cache-test:functional-447073
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-447073
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.40s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-447073 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (188.592499ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.16s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 kubectl -- --context functional-447073 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-447073 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (55.97s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-447073 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1025 09:25:46.806937  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:25:57.048621  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:26:17.530602  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-447073 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (55.969996181s)
functional_test.go:776: restart took 55.970145192s for "functional-447073" cluster.
I1025 09:26:41.373850  371331 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (55.97s)
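
--extra-config is the pass-through for component flags, in component.key=value form; the option is persisted in the profile (it resurfaces as ExtraOptions in the config dump under DryRun below). A minimal sketch:

# Restart the cluster with an extra admission plugin on the apiserver.
minikube start -p functional-447073 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all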

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-447073 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
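
The health check above boils down to reading phase and Ready status off the control-plane pods. Roughly the same view, via jsonpath instead of the test's JSON parsing:

kubectl --context functional-447073 get po -l tier=control-plane -n kube-system \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'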

TestFunctional/serial/LogsCmd (0.95s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 logs
--- PASS: TestFunctional/serial/LogsCmd (0.95s)

TestFunctional/serial/LogsFileCmd (0.98s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 logs --file /tmp/TestFunctionalserialLogsFileCmd424867128/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.98s)

TestFunctional/serial/InvalidService (4.32s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-447073 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-447073
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-447073: exit status 115 (236.473237ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.191:32220 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-447073 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.32s)
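
testdata/invalidsvc.yaml is not reproduced in the log, but the failure mode only requires a Service whose selector matches no running pod, so "minikube service" finds no endpoints and exits with SVC_UNREACHABLE (status 115 above). A hypothetical stand-in:

kubectl --context functional-447073 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: no-such-pod    # matches nothing, so the service never becomes reachable
  ports:
  - port: 80
EOF
minikube -p functional-447073 service invalid-svc   # expected to fail as in the run above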

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-447073 config get cpus: exit status 14 (69.825508ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-447073 config get cpus: exit status 14 (67.126346ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
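
The exit codes being asserted: "config get" on an unset key exits 14 with the "specified key could not be found" error, while get after set exits 0. In shell form:

minikube -p functional-447073 config get cpus; echo "exit=$?"    # exit=14 while unset
minikube -p functional-447073 config set cpus 2
minikube -p functional-447073 config get cpus                    # prints 2, exit 0
minikube -p functional-447073 config unset cpus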

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-447073 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-447073 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (134.051895ms)

-- stdout --
	* [functional-447073] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-367343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-367343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1025 09:26:49.951573  380949 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:26:49.951721  380949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:26:49.951732  380949 out.go:374] Setting ErrFile to fd 2...
	I1025 09:26:49.951739  380949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:26:49.952082  380949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
	I1025 09:26:49.952746  380949 out.go:368] Setting JSON to false
	I1025 09:26:49.954049  380949 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4152,"bootTime":1761380258,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:26:49.954178  380949 start.go:141] virtualization: kvm guest
	I1025 09:26:49.956435  380949 out.go:179] * [functional-447073] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:26:49.957865  380949 notify.go:220] Checking for updates...
	I1025 09:26:49.957898  380949 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:26:49.959229  380949 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:26:49.960713  380949 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-367343/kubeconfig
	I1025 09:26:49.961891  380949 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-367343/.minikube
	I1025 09:26:49.962871  380949 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:26:49.963757  380949 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:26:49.965164  380949 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:26:49.965749  380949 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:26:50.000655  380949 out.go:179] * Using the kvm2 driver based on existing profile
	I1025 09:26:50.001770  380949 start.go:305] selected driver: kvm2
	I1025 09:26:50.001784  380949 start.go:925] validating driver "kvm2" against &{Name:functional-447073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-447073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:26:50.001897  380949 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:26:50.003791  380949 out.go:203] 
	W1025 09:26:50.005029  380949 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 09:26:50.007336  380949 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-447073 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
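
--dry-run validates the requested settings against the existing profile without touching the VM; 250MB is below the 1800MB usable minimum, so the first invocation exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY, while the dry run without a memory override succeeds. Reproduced by hand:

minikube start -p functional-447073 --dry-run --memory 250MB --driver=kvm2
echo "exit=$?"   # exit=23
minikube start -p functional-447073 --dry-run --driver=kvm2   # exits 0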

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-447073 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-447073 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (136.478268ms)

-- stdout --
	* [functional-447073] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-367343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-367343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1025 09:26:50.220293  381001 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:26:50.220471  381001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:26:50.220487  381001 out.go:374] Setting ErrFile to fd 2...
	I1025 09:26:50.220494  381001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:26:50.220916  381001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
	I1025 09:26:50.221572  381001 out.go:368] Setting JSON to false
	I1025 09:26:50.222863  381001 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4152,"bootTime":1761380258,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:26:50.223001  381001 start.go:141] virtualization: kvm guest
	I1025 09:26:50.224795  381001 out.go:179] * [functional-447073] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1025 09:26:50.226216  381001 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:26:50.226244  381001 notify.go:220] Checking for updates...
	I1025 09:26:50.228265  381001 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:26:50.229474  381001 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-367343/kubeconfig
	I1025 09:26:50.230673  381001 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-367343/.minikube
	I1025 09:26:50.231736  381001 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:26:50.232879  381001 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:26:50.234299  381001 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:26:50.234772  381001 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:26:50.271376  381001 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1025 09:26:50.272952  381001 start.go:305] selected driver: kvm2
	I1025 09:26:50.272970  381001 start.go:925] validating driver "kvm2" against &{Name:functional-447073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-447073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:26:50.273073  381001 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:26:50.275002  381001 out.go:203] 
	W1025 09:26:50.276433  381001 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 09:26:50.278578  381001 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
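The French stderr above is the expected outcome: this test asserts that minikube localizes its fatal RSRC_INSUFFICIENT_REQ_MEMORY message under a French locale. A minimal sketch of reproducing the refusal by hand, assuming the locale is forced via LC_ALL (the profile name and the deliberately undersized 250MB request are taken from the log above):

    # Hedged sketch: request less memory than minikube's 1800MB floor under a French locale.
    LC_ALL=fr out/minikube-linux-amd64 start -p functional-447073 --memory=250MB
    # Expected: "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : ..."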

TestFunctional/parallel/StatusCmd (0.75s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.75s)
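The -f flag exercised above takes a Go template rendered against minikube's status object, so individual fields can be extracted for scripting. A minimal sketch using the same four fields the test queries:

    # Hedged sketch: pull single status fields via the Go-template format flag.
    out/minikube-linux-amd64 -p functional-447073 status -f '{{.Host}},{{.Kubelet}},{{.APIServer}},{{.Kubeconfig}}'
    # Typically prints: Running,Running,Running,Configured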

TestFunctional/parallel/ServiceCmdConnect (9.39s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-447073 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-447073 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-55ktk" [8c5ae381-45a1-4873-a9f5-d5a489ed90f3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-55ktk" [8c5ae381-45a1-4873-a9f5-d5a489ed90f3] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003675497s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.191:31301
functional_test.go:1680: http://192.168.39.191:31301: success! body:
Request served by hello-node-connect-7d85dfc575-55ktk

HTTP/1.1 GET /

Host: 192.168.39.191:31301
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.39s)
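The echoed body above is the request exactly as the pod received it. A minimal sketch of the same round trip by hand, assuming curl is available on the host:

    # Hedged sketch: resolve the NodePort URL the way the test does, then hit it.
    URL=$(out/minikube-linux-amd64 -p functional-447073 service hello-node-connect --url)
    curl -s "$URL"
    # echo-server replies with "Request served by hello-node-connect-..." plus the request headers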

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/SSHCmd (0.31s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.31s)

TestFunctional/parallel/CpCmd (1.21s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh -n functional-447073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 cp functional-447073:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3567783094/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh -n functional-447073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh -n functional-447073 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.21s)

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/371331/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "sudo cat /etc/test/nested/copy/371331/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.04s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/371331.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "sudo cat /etc/ssl/certs/371331.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/371331.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "sudo cat /usr/share/ca-certificates/371331.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3713312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "sudo cat /etc/ssl/certs/3713312.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3713312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "sudo cat /usr/share/ca-certificates/3713312.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.04s)
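The numeric filenames verified above (51391683.0, 3ec20f2e.0) follow the OpenSSL c_rehash convention: each synced certificate must also be reachable through a <subject-hash>.0 link so TLS libraries can find it by hash. A minimal sketch of checking the hash inside the VM, assuming openssl is present in the guest image:

    # Hedged sketch: the hash openssl prints should match the .0 link name checked above.
    out/minikube-linux-amd64 -p functional-447073 ssh 'openssl x509 -noout -hash -in /etc/ssl/certs/371331.pem'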

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-447073 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.19s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-447073 ssh "sudo systemctl is-active crio": exit status 1 (185.515169ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.19s)
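The non-zero exit here is the point of the test: systemctl is-active prints "inactive" and exits with status 3 for a stopped unit, and the ssh wrapper surfaces that failure, confirming cri-o is disabled while docker is the active runtime. A sketch of the same check with the outcome made explicit:

    # Hedged sketch: a non-zero exit from is-active means the unit is not running.
    out/minikube-linux-amd64 -p functional-447073 ssh 'sudo systemctl is-active crio' || echo "crio disabled, as expected"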

TestFunctional/parallel/License (0.35s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.35s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-447073 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-447073 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-lfbb2" [db9513be-961f-495a-b80a-58e6a2621b5d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-lfbb2" [db9513be-961f-495a-b80a-58e6a2621b5d] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.009407859s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.44s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.44s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-447073 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-447073
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-447073
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-447073 image ls --format short --alsologtostderr:
I1025 09:27:09.043946  381848 out.go:360] Setting OutFile to fd 1 ...
I1025 09:27:09.044265  381848 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:27:09.044275  381848 out.go:374] Setting ErrFile to fd 2...
I1025 09:27:09.044280  381848 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:27:09.044544  381848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
I1025 09:27:09.045153  381848 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:27:09.045294  381848 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:27:09.047553  381848 ssh_runner.go:195] Run: systemctl --version
I1025 09:27:09.050171  381848 main.go:141] libmachine: domain functional-447073 has defined MAC address 52:54:00:28:71:c8 in network mk-functional-447073
I1025 09:27:09.050655  381848 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:71:c8", ip: ""} in network mk-functional-447073: {Iface:virbr1 ExpiryTime:2025-10-25 10:23:33 +0000 UTC Type:0 Mac:52:54:00:28:71:c8 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-447073 Clientid:01:52:54:00:28:71:c8}
I1025 09:27:09.050686  381848 main.go:141] libmachine: domain functional-447073 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:71:c8 in network mk-functional-447073
I1025 09:27:09.050845  381848 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/functional-447073/id_rsa Username:docker}
I1025 09:27:09.130953  381848 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-447073 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ docker.io/library/minikube-local-cache-test │ functional-447073 │ f53039ca89b63 │ 30B    │
│ registry.k8s.io/kube-apiserver              │ v1.34.1           │ c3994bc696102 │ 88MB   │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ 5f1f5298c888d │ 195MB  │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1           │ c80c8dbafe7dd │ 74.9MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1           │ 7dd6aaa1717ab │ 52.8MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1           │ fc25172553d79 │ 71.9MB │
│ docker.io/kicbase/echo-server               │ functional-447073 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-447073 image ls --format table --alsologtostderr:
I1025 09:27:09.389650  381870 out.go:360] Setting OutFile to fd 1 ...
I1025 09:27:09.389953  381870 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:27:09.389964  381870 out.go:374] Setting ErrFile to fd 2...
I1025 09:27:09.389968  381870 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:27:09.390144  381870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
I1025 09:27:09.390762  381870 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:27:09.390859  381870 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:27:09.392815  381870 ssh_runner.go:195] Run: systemctl --version
I1025 09:27:09.394922  381870 main.go:141] libmachine: domain functional-447073 has defined MAC address 52:54:00:28:71:c8 in network mk-functional-447073
I1025 09:27:09.395290  381870 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:71:c8", ip: ""} in network mk-functional-447073: {Iface:virbr1 ExpiryTime:2025-10-25 10:23:33 +0000 UTC Type:0 Mac:52:54:00:28:71:c8 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-447073 Clientid:01:52:54:00:28:71:c8}
I1025 09:27:09.395318  381870 main.go:141] libmachine: domain functional-447073 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:71:c8 in network mk-functional-447073
I1025 09:27:09.395444  381870 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/functional-447073/id_rsa Username:docker}
I1025 09:27:09.474302  381870 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.18s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-447073 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-447073","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"f53039ca89b6381668cc5b061428fe3b96c48d24219942990c3470d911f2d1bb","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-447073"],"size":"30"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"88000000"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"71900000"},{"id
":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195000000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"74900000"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe163
79ee9b6cb813","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"52800000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-447073 image ls --format json --alsologtostderr:
I1025 09:27:09.219070  381859 out.go:360] Setting OutFile to fd 1 ...
I1025 09:27:09.219337  381859 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:27:09.219346  381859 out.go:374] Setting ErrFile to fd 2...
I1025 09:27:09.219349  381859 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:27:09.219534  381859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
I1025 09:27:09.220111  381859 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:27:09.220220  381859 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:27:09.222581  381859 ssh_runner.go:195] Run: systemctl --version
I1025 09:27:09.225041  381859 main.go:141] libmachine: domain functional-447073 has defined MAC address 52:54:00:28:71:c8 in network mk-functional-447073
I1025 09:27:09.225500  381859 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:71:c8", ip: ""} in network mk-functional-447073: {Iface:virbr1 ExpiryTime:2025-10-25 10:23:33 +0000 UTC Type:0 Mac:52:54:00:28:71:c8 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-447073 Clientid:01:52:54:00:28:71:c8}
I1025 09:27:09.225529  381859 main.go:141] libmachine: domain functional-447073 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:71:c8 in network mk-functional-447073
I1025 09:27:09.225725  381859 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/functional-447073/id_rsa Username:docker}
I1025 09:27:09.304753  381859 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.17s)
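The JSON listing above is an array of {id, repoDigests, repoTags, size} objects, which makes it the most script-friendly of the list formats. A minimal sketch, assuming jq is installed on the host:

    # Hedged sketch: flatten the repo tags out of the JSON image list.
    out/minikube-linux-amd64 -p functional-447073 image ls --format json | jq -r '.[].repoTags[]' | sort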

TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-447073 image ls --format yaml --alsologtostderr:
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-447073
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "52800000"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "88000000"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "71900000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: f53039ca89b6381668cc5b061428fe3b96c48d24219942990c3470d911f2d1bb
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-447073
size: "30"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "74900000"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195000000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-447073 image ls --format yaml --alsologtostderr:
I1025 09:27:09.569352  381881 out.go:360] Setting OutFile to fd 1 ...
I1025 09:27:09.569714  381881 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:27:09.569724  381881 out.go:374] Setting ErrFile to fd 2...
I1025 09:27:09.569731  381881 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:27:09.570050  381881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
I1025 09:27:09.570694  381881 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:27:09.570792  381881 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:27:09.572713  381881 ssh_runner.go:195] Run: systemctl --version
I1025 09:27:09.574729  381881 main.go:141] libmachine: domain functional-447073 has defined MAC address 52:54:00:28:71:c8 in network mk-functional-447073
I1025 09:27:09.575046  381881 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:71:c8", ip: ""} in network mk-functional-447073: {Iface:virbr1 ExpiryTime:2025-10-25 10:23:33 +0000 UTC Type:0 Mac:52:54:00:28:71:c8 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-447073 Clientid:01:52:54:00:28:71:c8}
I1025 09:27:09.575072  381881 main.go:141] libmachine: domain functional-447073 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:71:c8 in network mk-functional-447073
I1025 09:27:09.575197  381881 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/functional-447073/id_rsa Username:docker}
I1025 09:27:09.655243  381881 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-447073 ssh pgrep buildkitd: exit status 1 (155.210418ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 image build -t localhost/my-image:functional-447073 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-447073 image build -t localhost/my-image:functional-447073 testdata/build --alsologtostderr: (3.68152728s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-447073 image build -t localhost/my-image:functional-447073 testdata/build --alsologtostderr:
I1025 09:27:09.897602  381903 out.go:360] Setting OutFile to fd 1 ...
I1025 09:27:09.897860  381903 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:27:09.897870  381903 out.go:374] Setting ErrFile to fd 2...
I1025 09:27:09.897874  381903 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:27:09.898059  381903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
I1025 09:27:09.898667  381903 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:27:09.899351  381903 config.go:182] Loaded profile config "functional-447073": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:27:09.901328  381903 ssh_runner.go:195] Run: systemctl --version
I1025 09:27:09.903671  381903 main.go:141] libmachine: domain functional-447073 has defined MAC address 52:54:00:28:71:c8 in network mk-functional-447073
I1025 09:27:09.904105  381903 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:71:c8", ip: ""} in network mk-functional-447073: {Iface:virbr1 ExpiryTime:2025-10-25 10:23:33 +0000 UTC Type:0 Mac:52:54:00:28:71:c8 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-447073 Clientid:01:52:54:00:28:71:c8}
I1025 09:27:09.904135  381903 main.go:141] libmachine: domain functional-447073 has defined IP address 192.168.39.191 and MAC address 52:54:00:28:71:c8 in network mk-functional-447073
I1025 09:27:09.904314  381903 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/functional-447073/id_rsa Username:docker}
I1025 09:27:09.985652  381903 build_images.go:161] Building image from path: /tmp/build.1036381761.tar
I1025 09:27:09.985722  381903 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1025 09:27:09.999406  381903 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1036381761.tar
I1025 09:27:10.004432  381903 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1036381761.tar: stat -c "%s %y" /var/lib/minikube/build/build.1036381761.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1036381761.tar': No such file or directory
I1025 09:27:10.004476  381903 ssh_runner.go:362] scp /tmp/build.1036381761.tar --> /var/lib/minikube/build/build.1036381761.tar (3072 bytes)
I1025 09:27:10.036251  381903 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1036381761
I1025 09:27:10.048727  381903 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1036381761 -xf /var/lib/minikube/build/build.1036381761.tar
I1025 09:27:10.061122  381903 docker.go:361] Building image: /var/lib/minikube/build/build.1036381761
I1025 09:27:10.061212  381903 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-447073 /var/lib/minikube/build/build.1036381761
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.7s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.8s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:79a65a8f433ed128a5cdfb86ce5c6703c06a1bdc6a89203771b27770f06615ee
#8 writing image sha256:79a65a8f433ed128a5cdfb86ce5c6703c06a1bdc6a89203771b27770f06615ee done
#8 naming to localhost/my-image:functional-447073 done
#8 DONE 0.1s
I1025 09:27:13.486391  381903 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-447073 /var/lib/minikube/build/build.1036381761: (3.425117245s)
I1025 09:27:13.486499  381903 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1036381761
I1025 09:27:13.503276  381903 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1036381761.tar
I1025 09:27:13.515269  381903 build_images.go:217] Built localhost/my-image:functional-447073 from /tmp/build.1036381761.tar
I1025 09:27:13.515306  381903 build_images.go:133] succeeded building to: functional-447073
I1025 09:27:13.515310  381903 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.02s)
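The BuildKit trace above reveals the shape of testdata/build: a 97-byte Dockerfile driving three steps (FROM busybox, RUN true, ADD content.txt /). A plausible reconstruction for reproducing the build by hand; the actual testdata contents are not shown in the log, so treat the file bodies as assumptions:

    # Hedged reconstruction of testdata/build from the build steps above.
    mkdir -p /tmp/build && cd /tmp/build
    printf 'test content\n' > content.txt        # placeholder; real contents not shown
    printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
    out/minikube-linux-amd64 -p functional-447073 image build -t localhost/my-image:functional-447073 .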

TestFunctional/parallel/ImageCommands/Setup (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.727880533s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-447073
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/MountCmd/any-port (7.24s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-447073 /tmp/TestFunctionalparallelMountCmdany-port1206304397/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761384409039157579" to /tmp/TestFunctionalparallelMountCmdany-port1206304397/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761384409039157579" to /tmp/TestFunctionalparallelMountCmdany-port1206304397/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761384409039157579" to /tmp/TestFunctionalparallelMountCmdany-port1206304397/001/test-1761384409039157579
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-447073 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (183.335287ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 09:26:49.222894  371331 retry.go:31] will retry after 622.567694ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 25 09:26 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 25 09:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 25 09:26 test-1761384409039157579
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh cat /mount-9p/test-1761384409039157579
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-447073 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [641408b6-670b-4f2a-92ce-98b6828337ca] Pending
helpers_test.go:352: "busybox-mount" [641408b6-670b-4f2a-92ce-98b6828337ca] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [641408b6-670b-4f2a-92ce-98b6828337ca] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [641408b6-670b-4f2a-92ce-98b6828337ca] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004123392s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-447073 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-447073 /tmp/TestFunctionalparallelMountCmdany-port1206304397/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.24s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "237.178907ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "70.108143ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "249.245164ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "74.186442ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 image load --daemon kicbase/echo-server:functional-447073 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 image load --daemon kicbase/echo-server:functional-447073 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.80s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-447073
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 image load --daemon kicbase/echo-server:functional-447073 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.55s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 image save kicbase/echo-server:functional-447073 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 image rm kicbase/echo-server:functional-447073 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.35s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.53s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-447073
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 image save --daemon kicbase/echo-server:functional-447073 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-447073
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)

TestFunctional/parallel/ServiceCmd/List (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

TestFunctional/parallel/DockerEnv/bash (0.69s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-447073 docker-env) && out/minikube-linux-amd64 status -p functional-447073"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-447073 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.69s)
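docker-env prints shell exports (DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH) that point the host docker CLI at the daemon inside the VM, which is why eval-ing it makes a plain docker images list the cluster's images. A minimal sketch, including the unset step to restore the shell; the --unset flag is assumed available in this build:

    # Hedged sketch: route the host docker CLI at the VM's daemon, then undo it.
    eval "$(out/minikube-linux-amd64 -p functional-447073 docker-env)"
    docker images    # now served by the docker daemon inside functional-447073
    eval "$(out/minikube-linux-amd64 -p functional-447073 docker-env --unset)"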

TestFunctional/parallel/MountCmd/specific-port (1.46s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-447073 /tmp/TestFunctionalparallelMountCmdspecific-port1452797215/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-447073 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (174.489222ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 09:26:56.456578  371331 retry.go:31] will retry after 557.856708ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-447073 /tmp/TestFunctionalparallelMountCmdspecific-port1452797215/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-447073 ssh "sudo umount -f /mount-9p": exit status 1 (169.082956ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-447073 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-447073 /tmp/TestFunctionalparallelMountCmdspecific-port1452797215/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.46s)
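Note: specific-port pins the 9p file server to a fixed host port instead of a random one. A minimal sketch of the same steps; /tmp/src is an illustrative host path, and the mount process is backgrounded here rather than run as a test daemon:

    minikube mount -p functional-447073 /tmp/src:/mount-9p --port 46464 &
    # verify the 9p mount from inside the guest
    minikube -p functional-447073 ssh "findmnt -T /mount-9p | grep 9p"
    # force-unmount during cleanup; exits non-zero if already unmounted
    minikube -p functional-447073 ssh "sudo umount -f /mount-9p"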

TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 service list -o json
functional_test.go:1504: Took "440.508292ms" to run "out/minikube-linux-amd64 -p functional-447073 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.191:31694
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 update-context --alsologtostderr -v=2
E1025 09:28:20.415081  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:30:36.550736  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:31:04.256845  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)
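Note: all three UpdateContextCmd subtests run the same command against different kubeconfig states and expect it to succeed in each. A minimal sketch:

    # rewrite this profile's kubeconfig entry to the cluster's current IP and port
    minikube -p functional-447073 update-context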

TestFunctional/parallel/ServiceCmd/Format (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)

TestFunctional/parallel/ServiceCmd/URL (0.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.191:31694
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.23s)
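Note: taken together, the ServiceCmd subtests cover the main ways of resolving a NodePort service endpoint. A minimal sketch, assuming a "hello-node" service in the default namespace:

    minikube -p functional-447073 service list              # human-readable table
    minikube -p functional-447073 service list -o json      # machine-readable
    minikube -p functional-447073 service --namespace=default --https --url hello-node
    minikube -p functional-447073 service hello-node --url  # plain URL, e.g. http://192.168.39.191:31694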

TestFunctional/parallel/MountCmd/VerifyCleanup (1.35s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-447073 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1167561909/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-447073 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1167561909/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-447073 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1167561909/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-447073 ssh "findmnt -T" /mount1: exit status 1 (174.004186ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 09:26:57.915550  371331 retry.go:31] will retry after 634.518519ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-447073 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-447073 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-447073 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1167561909/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-447073 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1167561909/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-447073 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1167561909/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.35s)
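Note: VerifyCleanup starts several mount daemons against one source directory and then tears them all down at once. A minimal sketch; /tmp/src is an illustrative host path:

    minikube mount -p functional-447073 /tmp/src:/mount1 &
    minikube mount -p functional-447073 /tmp/src:/mount2 &
    minikube mount -p functional-447073 /tmp/src:/mount3 &
    # kill every mount process belonging to this profile in one shot
    minikube mount -p functional-447073 --kill=true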

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-447073
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-447073
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-447073
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (231.62s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-130661 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-130661 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m25.257018856s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-130661 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-130661 cache add gcr.io/k8s-minikube/gvisor-addon:2: (4.49936345s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-130661 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-130661 addons enable gvisor: (4.630078661s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:352: "gvisor" [a78b5eea-9c1a-4270-b930-e9a2f25dffeb] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004145137s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-130661 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:352: "nginx-gvisor" [e4efce67-e307-48c4-ba70-74eeccc674d9] Pending
helpers_test.go:352: "nginx-gvisor" [e4efce67-e307-48c4-ba70-74eeccc674d9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-gvisor" [e4efce67-e307-48c4-ba70-74eeccc674d9] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 55.004868292s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-130661
E1025 10:15:19.624140  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-130661: (7.022250024s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-130661 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-130661 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (57.04822724s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:352: "gvisor" [a78b5eea-9c1a-4270-b930-e9a2f25dffeb] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.005380536s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:352: "nginx-gvisor" [e4efce67-e307-48c4-ba70-74eeccc674d9] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.004849947s
helpers_test.go:175: Cleaning up "gvisor-130661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-130661
--- PASS: TestGvisorAddon (231.62s)
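Note: the gVisor addon needs the containerd runtime, so the test starts the profile with --container-runtime=containerd before enabling it. A minimal sketch of the setup sequence exercised above:

    minikube start -p gvisor-130661 --memory=3072 --container-runtime=containerd --driver=kvm2
    minikube -p gvisor-130661 cache add gcr.io/k8s-minikube/gvisor-addon:2
    minikube -p gvisor-130661 addons enable gvisor
    # deploy a workload that requests the gVisor runtime
    kubectl --context gvisor-130661 replace --force -f testdata/nginx-gvisor.yaml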

TestMultiControlPlane/serial/StartCluster (217.71s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 
E1025 09:40:36.550815  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-815670 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 : (3m37.126218857s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (217.71s)
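Note: the --ha flag provisions multiple control-plane nodes instead of one. A minimal sketch of the start-and-verify pattern used above:

    minikube -p ha-815670 start --ha --memory 3072 --wait true --driver=kvm2
    # reports per-node host/kubelet/apiserver/kubeconfig state
    minikube -p ha-815670 status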

TestMultiControlPlane/serial/DeployApp (6.33s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-815670 kubectl -- rollout status deployment/busybox: (3.875030217s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- exec busybox-7b57f96db7-75lzj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- exec busybox-7b57f96db7-nv9lx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- exec busybox-7b57f96db7-qtx9q -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- exec busybox-7b57f96db7-75lzj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- exec busybox-7b57f96db7-nv9lx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- exec busybox-7b57f96db7-qtx9q -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- exec busybox-7b57f96db7-75lzj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- exec busybox-7b57f96db7-nv9lx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- exec busybox-7b57f96db7-qtx9q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.33s)
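Note: DeployApp verifies in-cluster DNS from every busybox replica. A minimal sketch of one probe; the pod name is illustrative, real names come from "get pods" as above:

    minikube -p ha-815670 kubectl -- rollout status deployment/busybox
    minikube -p ha-815670 kubectl -- exec busybox-7b57f96db7-75lzj -- nslookup kubernetes.default.svc.cluster.local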

TestMultiControlPlane/serial/PingHostFromPods (1.39s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- exec busybox-7b57f96db7-75lzj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- exec busybox-7b57f96db7-75lzj -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- exec busybox-7b57f96db7-nv9lx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- exec busybox-7b57f96db7-nv9lx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- exec busybox-7b57f96db7-qtx9q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 kubectl -- exec busybox-7b57f96db7-qtx9q -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.39s)

TestMultiControlPlane/serial/AddWorkerNode (50.45s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-815670 node add --alsologtostderr -v 5: (49.774320163s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (50.45s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-815670 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.7s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.70s)

TestMultiControlPlane/serial/CopyFile (10.87s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp testdata/cp-test.txt ha-815670:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp ha-815670:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile338208031/001/cp-test_ha-815670.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp ha-815670:/home/docker/cp-test.txt ha-815670-m02:/home/docker/cp-test_ha-815670_ha-815670-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m02 "sudo cat /home/docker/cp-test_ha-815670_ha-815670-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp ha-815670:/home/docker/cp-test.txt ha-815670-m03:/home/docker/cp-test_ha-815670_ha-815670-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m03 "sudo cat /home/docker/cp-test_ha-815670_ha-815670-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp ha-815670:/home/docker/cp-test.txt ha-815670-m04:/home/docker/cp-test_ha-815670_ha-815670-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m04 "sudo cat /home/docker/cp-test_ha-815670_ha-815670-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp testdata/cp-test.txt ha-815670-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp ha-815670-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile338208031/001/cp-test_ha-815670-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp ha-815670-m02:/home/docker/cp-test.txt ha-815670:/home/docker/cp-test_ha-815670-m02_ha-815670.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670 "sudo cat /home/docker/cp-test_ha-815670-m02_ha-815670.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp ha-815670-m02:/home/docker/cp-test.txt ha-815670-m03:/home/docker/cp-test_ha-815670-m02_ha-815670-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m03 "sudo cat /home/docker/cp-test_ha-815670-m02_ha-815670-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp ha-815670-m02:/home/docker/cp-test.txt ha-815670-m04:/home/docker/cp-test_ha-815670-m02_ha-815670-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m04 "sudo cat /home/docker/cp-test_ha-815670-m02_ha-815670-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp testdata/cp-test.txt ha-815670-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp ha-815670-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile338208031/001/cp-test_ha-815670-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp ha-815670-m03:/home/docker/cp-test.txt ha-815670:/home/docker/cp-test_ha-815670-m03_ha-815670.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670 "sudo cat /home/docker/cp-test_ha-815670-m03_ha-815670.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp ha-815670-m03:/home/docker/cp-test.txt ha-815670-m02:/home/docker/cp-test_ha-815670-m03_ha-815670-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m02 "sudo cat /home/docker/cp-test_ha-815670-m03_ha-815670-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp ha-815670-m03:/home/docker/cp-test.txt ha-815670-m04:/home/docker/cp-test_ha-815670-m03_ha-815670-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m04 "sudo cat /home/docker/cp-test_ha-815670-m03_ha-815670-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp testdata/cp-test.txt ha-815670-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp ha-815670-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile338208031/001/cp-test_ha-815670-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp ha-815670-m04:/home/docker/cp-test.txt ha-815670:/home/docker/cp-test_ha-815670-m04_ha-815670.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670 "sudo cat /home/docker/cp-test_ha-815670-m04_ha-815670.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp ha-815670-m04:/home/docker/cp-test.txt ha-815670-m02:/home/docker/cp-test_ha-815670-m04_ha-815670-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m04 "sudo cat /home/docker/cp-test.txt"
E1025 09:41:47.893701  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:47.900248  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:47.911720  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:47.933180  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:47.974653  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m02 "sudo cat /home/docker/cp-test_ha-815670-m04_ha-815670-m02.txt"
E1025 09:41:48.056344  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 cp ha-815670-m04:/home/docker/cp-test.txt ha-815670-m03:/home/docker/cp-test_ha-815670-m04_ha-815670-m03.txt
E1025 09:41:48.217942  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m04 "sudo cat /home/docker/cp-test.txt"
E1025 09:41:48.540243  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 ssh -n ha-815670-m03 "sudo cat /home/docker/cp-test_ha-815670-m04_ha-815670-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.87s)
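Note: CopyFile drives "minikube cp" in every direction across the four nodes. A minimal sketch of the three forms the test uses; the host destination path is illustrative:

    # host -> node
    minikube -p ha-815670 cp testdata/cp-test.txt ha-815670:/home/docker/cp-test.txt
    # node -> host
    minikube -p ha-815670 cp ha-815670:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node
    minikube -p ha-815670 cp ha-815670:/home/docker/cp-test.txt ha-815670-m02:/home/docker/cp-test.txt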

TestMultiControlPlane/serial/StopSecondaryNode (12.33s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 node stop m02 --alsologtostderr -v 5
E1025 09:41:49.182134  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:50.463566  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:53.024964  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:58.147362  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:59.619259  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-815670 node stop m02 --alsologtostderr -v 5: (11.840065131s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-815670 status --alsologtostderr -v 5: exit status 7 (493.482342ms)

-- stdout --
	ha-815670
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-815670-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-815670-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-815670-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1025 09:42:00.690675  386883 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:42:00.690954  386883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:42:00.690963  386883 out.go:374] Setting ErrFile to fd 2...
	I1025 09:42:00.690966  386883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:42:00.691171  386883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
	I1025 09:42:00.691361  386883 out.go:368] Setting JSON to false
	I1025 09:42:00.691396  386883 mustload.go:65] Loading cluster: ha-815670
	I1025 09:42:00.691517  386883 notify.go:220] Checking for updates...
	I1025 09:42:00.691781  386883 config.go:182] Loaded profile config "ha-815670": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:42:00.691796  386883 status.go:174] checking status of ha-815670 ...
	I1025 09:42:00.694004  386883 status.go:371] ha-815670 host status = "Running" (err=<nil>)
	I1025 09:42:00.694023  386883 host.go:66] Checking if "ha-815670" exists ...
	I1025 09:42:00.696765  386883 main.go:141] libmachine: domain ha-815670 has defined MAC address 52:54:00:b8:56:3a in network mk-ha-815670
	I1025 09:42:00.697252  386883 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:56:3a", ip: ""} in network mk-ha-815670: {Iface:virbr1 ExpiryTime:2025-10-25 10:37:16 +0000 UTC Type:0 Mac:52:54:00:b8:56:3a Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-815670 Clientid:01:52:54:00:b8:56:3a}
	I1025 09:42:00.697287  386883 main.go:141] libmachine: domain ha-815670 has defined IP address 192.168.39.184 and MAC address 52:54:00:b8:56:3a in network mk-ha-815670
	I1025 09:42:00.697441  386883 host.go:66] Checking if "ha-815670" exists ...
	I1025 09:42:00.697691  386883 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:42:00.699712  386883 main.go:141] libmachine: domain ha-815670 has defined MAC address 52:54:00:b8:56:3a in network mk-ha-815670
	I1025 09:42:00.700167  386883 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:56:3a", ip: ""} in network mk-ha-815670: {Iface:virbr1 ExpiryTime:2025-10-25 10:37:16 +0000 UTC Type:0 Mac:52:54:00:b8:56:3a Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-815670 Clientid:01:52:54:00:b8:56:3a}
	I1025 09:42:00.700200  386883 main.go:141] libmachine: domain ha-815670 has defined IP address 192.168.39.184 and MAC address 52:54:00:b8:56:3a in network mk-ha-815670
	I1025 09:42:00.700374  386883 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/ha-815670/id_rsa Username:docker}
	I1025 09:42:00.790012  386883 ssh_runner.go:195] Run: systemctl --version
	I1025 09:42:00.796338  386883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:42:00.814651  386883 kubeconfig.go:125] found "ha-815670" server: "https://192.168.39.254:8443"
	I1025 09:42:00.814695  386883 api_server.go:166] Checking apiserver status ...
	I1025 09:42:00.814794  386883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:42:00.836634  386883 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2521/cgroup
	W1025 09:42:00.849991  386883 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2521/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:42:00.850051  386883 ssh_runner.go:195] Run: ls
	I1025 09:42:00.855253  386883 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1025 09:42:00.863923  386883 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1025 09:42:00.863949  386883 status.go:463] ha-815670 apiserver status = Running (err=<nil>)
	I1025 09:42:00.863960  386883 status.go:176] ha-815670 status: &{Name:ha-815670 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:42:00.863975  386883 status.go:174] checking status of ha-815670-m02 ...
	I1025 09:42:00.865622  386883 status.go:371] ha-815670-m02 host status = "Stopped" (err=<nil>)
	I1025 09:42:00.865641  386883 status.go:384] host is not running, skipping remaining checks
	I1025 09:42:00.865646  386883 status.go:176] ha-815670-m02 status: &{Name:ha-815670-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:42:00.865661  386883 status.go:174] checking status of ha-815670-m03 ...
	I1025 09:42:00.866785  386883 status.go:371] ha-815670-m03 host status = "Running" (err=<nil>)
	I1025 09:42:00.866804  386883 host.go:66] Checking if "ha-815670-m03" exists ...
	I1025 09:42:00.869505  386883 main.go:141] libmachine: domain ha-815670-m03 has defined MAC address 52:54:00:bc:08:f1 in network mk-ha-815670
	I1025 09:42:00.869947  386883 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:08:f1", ip: ""} in network mk-ha-815670: {Iface:virbr1 ExpiryTime:2025-10-25 10:39:25 +0000 UTC Type:0 Mac:52:54:00:bc:08:f1 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-815670-m03 Clientid:01:52:54:00:bc:08:f1}
	I1025 09:42:00.869972  386883 main.go:141] libmachine: domain ha-815670-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:bc:08:f1 in network mk-ha-815670
	I1025 09:42:00.870123  386883 host.go:66] Checking if "ha-815670-m03" exists ...
	I1025 09:42:00.870390  386883 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:42:00.872667  386883 main.go:141] libmachine: domain ha-815670-m03 has defined MAC address 52:54:00:bc:08:f1 in network mk-ha-815670
	I1025 09:42:00.873014  386883 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:08:f1", ip: ""} in network mk-ha-815670: {Iface:virbr1 ExpiryTime:2025-10-25 10:39:25 +0000 UTC Type:0 Mac:52:54:00:bc:08:f1 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-815670-m03 Clientid:01:52:54:00:bc:08:f1}
	I1025 09:42:00.873050  386883 main.go:141] libmachine: domain ha-815670-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:bc:08:f1 in network mk-ha-815670
	I1025 09:42:00.873239  386883 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/ha-815670-m03/id_rsa Username:docker}
	I1025 09:42:00.950830  386883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:42:00.968513  386883 kubeconfig.go:125] found "ha-815670" server: "https://192.168.39.254:8443"
	I1025 09:42:00.968573  386883 api_server.go:166] Checking apiserver status ...
	I1025 09:42:00.968621  386883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:42:00.987683  386883 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2400/cgroup
	W1025 09:42:00.999473  386883 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2400/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:42:00.999532  386883 ssh_runner.go:195] Run: ls
	I1025 09:42:01.004647  386883 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1025 09:42:01.009893  386883 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1025 09:42:01.009917  386883 status.go:463] ha-815670-m03 apiserver status = Running (err=<nil>)
	I1025 09:42:01.009926  386883 status.go:176] ha-815670-m03 status: &{Name:ha-815670-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:42:01.009941  386883 status.go:174] checking status of ha-815670-m04 ...
	I1025 09:42:01.011656  386883 status.go:371] ha-815670-m04 host status = "Running" (err=<nil>)
	I1025 09:42:01.011674  386883 host.go:66] Checking if "ha-815670-m04" exists ...
	I1025 09:42:01.014695  386883 main.go:141] libmachine: domain ha-815670-m04 has defined MAC address 52:54:00:09:0e:df in network mk-ha-815670
	I1025 09:42:01.015093  386883 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0e:df", ip: ""} in network mk-ha-815670: {Iface:virbr1 ExpiryTime:2025-10-25 10:41:02 +0000 UTC Type:0 Mac:52:54:00:09:0e:df Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-815670-m04 Clientid:01:52:54:00:09:0e:df}
	I1025 09:42:01.015128  386883 main.go:141] libmachine: domain ha-815670-m04 has defined IP address 192.168.39.142 and MAC address 52:54:00:09:0e:df in network mk-ha-815670
	I1025 09:42:01.015275  386883 host.go:66] Checking if "ha-815670-m04" exists ...
	I1025 09:42:01.015486  386883 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:42:01.017472  386883 main.go:141] libmachine: domain ha-815670-m04 has defined MAC address 52:54:00:09:0e:df in network mk-ha-815670
	I1025 09:42:01.017778  386883 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0e:df", ip: ""} in network mk-ha-815670: {Iface:virbr1 ExpiryTime:2025-10-25 10:41:02 +0000 UTC Type:0 Mac:52:54:00:09:0e:df Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-815670-m04 Clientid:01:52:54:00:09:0e:df}
	I1025 09:42:01.017794  386883 main.go:141] libmachine: domain ha-815670-m04 has defined IP address 192.168.39.142 and MAC address 52:54:00:09:0e:df in network mk-ha-815670
	I1025 09:42:01.017895  386883 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/ha-815670-m04/id_rsa Username:docker}
	I1025 09:42:01.102480  386883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:42:01.118852  386883 status.go:176] ha-815670-m04 status: &{Name:ha-815670-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.33s)
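Note: with one control plane stopped, "status" exits with status 7, which is what the test asserts on. A minimal sketch of the stop-and-check pattern:

    minikube -p ha-815670 node stop m02
    minikube -p ha-815670 status   # exit status 7 while any node is down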

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

TestMultiControlPlane/serial/RestartSecondaryNode (25.53s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 node start m02 --alsologtostderr -v 5
E1025 09:42:08.389459  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-815670 node start m02 --alsologtostderr -v 5: (24.691554434s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (25.53s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (165.62s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 stop --alsologtostderr -v 5
E1025 09:42:28.871860  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:43:09.834521  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-815670 stop --alsologtostderr -v 5: (42.043081553s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 start --wait true --alsologtostderr -v 5
E1025 09:44:31.755988  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-815670 start --wait true --alsologtostderr -v 5: (2m3.431233246s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (165.62s)
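Note: RestartClusterKeepsNodes asserts that "node list" output is identical before and after a full stop/start cycle. A minimal sketch:

    minikube -p ha-815670 node list
    minikube -p ha-815670 stop
    minikube -p ha-815670 start --wait true
    minikube -p ha-815670 node list   # expected to match the first listing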

TestMultiControlPlane/serial/DeleteSecondaryNode (7.09s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-815670 node delete m03 --alsologtostderr -v 5: (6.461222151s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.09s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.49s)

TestMultiControlPlane/serial/StopCluster (42s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 stop --alsologtostderr -v 5
E1025 09:45:36.551266  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-815670 stop --alsologtostderr -v 5: (41.925931883s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-815670 status --alsologtostderr -v 5: exit status 7 (68.754856ms)

-- stdout --
	ha-815670
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-815670-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-815670-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 09:46:03.171534  388467 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:46:03.171862  388467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:03.171875  388467 out.go:374] Setting ErrFile to fd 2...
	I1025 09:46:03.171881  388467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:03.172110  388467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
	I1025 09:46:03.172329  388467 out.go:368] Setting JSON to false
	I1025 09:46:03.172374  388467 mustload.go:65] Loading cluster: ha-815670
	I1025 09:46:03.172529  388467 notify.go:220] Checking for updates...
	I1025 09:46:03.172832  388467 config.go:182] Loaded profile config "ha-815670": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:46:03.172850  388467 status.go:174] checking status of ha-815670 ...
	I1025 09:46:03.174787  388467 status.go:371] ha-815670 host status = "Stopped" (err=<nil>)
	I1025 09:46:03.174808  388467 status.go:384] host is not running, skipping remaining checks
	I1025 09:46:03.174815  388467 status.go:176] ha-815670 status: &{Name:ha-815670 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:46:03.174836  388467 status.go:174] checking status of ha-815670-m02 ...
	I1025 09:46:03.176336  388467 status.go:371] ha-815670-m02 host status = "Stopped" (err=<nil>)
	I1025 09:46:03.176351  388467 status.go:384] host is not running, skipping remaining checks
	I1025 09:46:03.176357  388467 status.go:176] ha-815670-m02 status: &{Name:ha-815670-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:46:03.176370  388467 status.go:174] checking status of ha-815670-m04 ...
	I1025 09:46:03.177673  388467 status.go:371] ha-815670-m04 host status = "Stopped" (err=<nil>)
	I1025 09:46:03.177688  388467 status.go:384] host is not running, skipping remaining checks
	I1025 09:46:03.177693  388467 status.go:176] ha-815670-m04 status: &{Name:ha-815670-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (42.00s)

TestMultiControlPlane/serial/RestartCluster (119.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 start --wait true --alsologtostderr -v 5 --driver=kvm2 
E1025 09:46:47.894662  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:47:15.598383  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-815670 start --wait true --alsologtostderr -v 5 --driver=kvm2 : (1m58.581923951s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (119.25s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

TestMultiControlPlane/serial/AddSecondaryNode (117.71s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-815670 node add --control-plane --alsologtostderr -v 5: (1m57.002510222s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-815670 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (117.71s)
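Note: nodes can join as workers or as additional control planes. A minimal sketch of the two forms used in this suite (AddWorkerNode and AddSecondaryNode respectively):

    minikube -p ha-815670 node add
    minikube -p ha-815670 node add --control-plane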

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.69s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.69s)

TestImageBuild/serial/Setup (40.06s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-074318 --driver=kvm2 
E1025 09:50:36.558246  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-074318 --driver=kvm2 : (40.06095009s)
--- PASS: TestImageBuild/serial/Setup (40.06s)

TestImageBuild/serial/NormalBuild (2.01s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-074318
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-074318: (2.013206442s)
--- PASS: TestImageBuild/serial/NormalBuild (2.01s)

TestImageBuild/serial/BuildWithBuildArg (0.9s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-074318
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.90s)

TestImageBuild/serial/BuildWithDockerIgnore (0.78s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-074318
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.78s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.93s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-074318
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.93s)

TestJSONOutput/start/Command (80.46s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-417869 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 
E1025 09:51:47.894153  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-417869 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 : (1m20.455654309s)
--- PASS: TestJSONOutput/start/Command (80.46s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-417869 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.57s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-417869 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (13.69s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-417869 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-417869 --output=json --user=testUser: (13.690678103s)
--- PASS: TestJSONOutput/stop/Command (13.69s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-712296 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-712296 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (83.941324ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8de12f69-81c4-46e4-829e-9a764fca0421","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-712296] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9336ec65-db6a-4086-a5bc-dd0eaa403bf8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21767"}}
	{"specversion":"1.0","id":"d9424f3e-133b-43b1-9605-b435602293be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a2224375-db02-429b-901c-6adf563ff69b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21767-367343/kubeconfig"}}
	{"specversion":"1.0","id":"bd668a22-8fda-4b49-874b-6ad0fc1420af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-367343/.minikube"}}
	{"specversion":"1.0","id":"1c36afb4-7acc-4f6b-a2dc-aca73b7db165","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"767e42c0-b6c4-4dae-ae39-58b5f5fc3040","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d0ad306c-8e98-4e63-a998-60d8484af1dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-712296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-712296
--- PASS: TestErrorJSONOutput (0.25s)
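
Each line of the stdout block above is one self-contained CloudEvents JSON object, so the stream can be post-processed line by line; a minimal sketch with jq (jq is an assumption here, the test itself decodes the events in Go):

    out/minikube-linux-amd64 start -p json-output-error-712296 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + " (exit " + .data.exitcode + ")"'
    # for the run above this would print: DRV_UNSUPPORTED_OS (exit 56)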

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (89.03s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-277575 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-277575 --driver=kvm2 : (42.780329966s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-281185 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-281185 --driver=kvm2 : (43.637341807s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-277575
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-281185
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-281185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-281185
helpers_test.go:175: Cleaning up "first-277575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-277575
--- PASS: TestMinikubeProfile (89.03s)

TestMountStart/serial/StartWithMountFirst (22.67s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-666024 --memory=3072 --mount-string /tmp/TestMountStartserial940297785/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-666024 --memory=3072 --mount-string /tmp/TestMountStartserial940297785/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (21.672847111s)
--- PASS: TestMountStart/serial/StartWithMountFirst (22.67s)

TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-666024 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-666024 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

TestMountStart/serial/StartWithMountSecond (23.95s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-685612 --memory=3072 --mount-string /tmp/TestMountStartserial940297785/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-685612 --memory=3072 --mount-string /tmp/TestMountStartserial940297785/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (22.953228045s)
--- PASS: TestMountStart/serial/StartWithMountSecond (23.95s)

TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-685612 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-685612 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (0.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-666024 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-685612 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-685612 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-685612
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-685612: (1.276719386s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (20.18s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-685612
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-685612: (19.17980117s)
--- PASS: TestMountStart/serial/RestartStopped (20.18s)

TestMountStart/serial/VerifyMountPostStop (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-685612 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-685612 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

TestMultiNode/serial/FreshStart2Nodes (111.53s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026340 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 
E1025 09:55:36.551237  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:56:47.893923  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026340 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 : (1m51.190457306s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.53s)

TestMultiNode/serial/DeployApp2Nodes (4.84s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026340 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026340 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-026340 -- rollout status deployment/busybox: (3.180950107s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026340 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026340 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026340 -- exec busybox-7b57f96db7-6cvbv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026340 -- exec busybox-7b57f96db7-rx594 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026340 -- exec busybox-7b57f96db7-6cvbv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026340 -- exec busybox-7b57f96db7-rx594 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026340 -- exec busybox-7b57f96db7-6cvbv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026340 -- exec busybox-7b57f96db7-rx594 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.84s)

TestMultiNode/serial/PingHostFrom2Pods (0.92s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026340 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026340 -- exec busybox-7b57f96db7-6cvbv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026340 -- exec busybox-7b57f96db7-6cvbv -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026340 -- exec busybox-7b57f96db7-rx594 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026340 -- exec busybox-7b57f96db7-rx594 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)
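
The sh pipeline above is how the test recovers the host gateway IP from inside each pod; sketched standalone, assuming BusyBox nslookup output where line 5 reads "Address 1: <ip> <name>":

    nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
    # awk 'NR==5' keeps only the answer line, cut takes its third
    # space-separated field (192.168.39.1 here), which the follow-up
    # "ping -c 1" then has to reach from the pod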

                                                
                                    
TestMultiNode/serial/AddNode (46.6s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-026340 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-026340 -v=5 --alsologtostderr: (46.163544844s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.60s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-026340 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.46s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

TestMultiNode/serial/CopyFile (6.08s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 cp testdata/cp-test.txt multinode-026340:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 cp multinode-026340:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile952168969/001/cp-test_multinode-026340.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 cp multinode-026340:/home/docker/cp-test.txt multinode-026340-m02:/home/docker/cp-test_multinode-026340_multinode-026340-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340-m02 "sudo cat /home/docker/cp-test_multinode-026340_multinode-026340-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 cp multinode-026340:/home/docker/cp-test.txt multinode-026340-m03:/home/docker/cp-test_multinode-026340_multinode-026340-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340-m03 "sudo cat /home/docker/cp-test_multinode-026340_multinode-026340-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 cp testdata/cp-test.txt multinode-026340-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 cp multinode-026340-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile952168969/001/cp-test_multinode-026340-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 cp multinode-026340-m02:/home/docker/cp-test.txt multinode-026340:/home/docker/cp-test_multinode-026340-m02_multinode-026340.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340 "sudo cat /home/docker/cp-test_multinode-026340-m02_multinode-026340.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 cp multinode-026340-m02:/home/docker/cp-test.txt multinode-026340-m03:/home/docker/cp-test_multinode-026340-m02_multinode-026340-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340-m03 "sudo cat /home/docker/cp-test_multinode-026340-m02_multinode-026340-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 cp testdata/cp-test.txt multinode-026340-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 cp multinode-026340-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile952168969/001/cp-test_multinode-026340-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 cp multinode-026340-m03:/home/docker/cp-test.txt multinode-026340:/home/docker/cp-test_multinode-026340-m03_multinode-026340.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340 "sudo cat /home/docker/cp-test_multinode-026340-m03_multinode-026340.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 cp multinode-026340-m03:/home/docker/cp-test.txt multinode-026340-m02:/home/docker/cp-test_multinode-026340-m03_multinode-026340-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340-m02 "sudo cat /home/docker/cp-test_multinode-026340-m03_multinode-026340-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.08s)
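
The long command list above is a single round-trip pattern applied to every (source, destination) node pair; condensed to one iteration, assuming the multinode-026340 profile is still running:

    out/minikube-linux-amd64 -p multinode-026340 cp testdata/cp-test.txt multinode-026340-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-026340 ssh -n multinode-026340-m02 "sudo cat /home/docker/cp-test.txt"
    # copy in, then cat over ssh to verify; the test also exercises
    # node-to-host and node-to-node copies across all three nodes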

                                                
                                    
TestMultiNode/serial/StopNode (2.51s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-026340 node stop m03: (1.839671343s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026340 status: exit status 7 (327.915906ms)

                                                
                                                
-- stdout --
	multinode-026340
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-026340-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-026340-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026340 status --alsologtostderr: exit status 7 (338.134679ms)

                                                
                                                
-- stdout --
	multinode-026340
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-026340-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-026340-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:57:58.739625  394815 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:57:58.739867  394815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:57:58.739875  394815 out.go:374] Setting ErrFile to fd 2...
	I1025 09:57:58.739879  394815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:57:58.740081  394815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
	I1025 09:57:58.740255  394815 out.go:368] Setting JSON to false
	I1025 09:57:58.740296  394815 mustload.go:65] Loading cluster: multinode-026340
	I1025 09:57:58.740418  394815 notify.go:220] Checking for updates...
	I1025 09:57:58.740673  394815 config.go:182] Loaded profile config "multinode-026340": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:57:58.740688  394815 status.go:174] checking status of multinode-026340 ...
	I1025 09:57:58.742617  394815 status.go:371] multinode-026340 host status = "Running" (err=<nil>)
	I1025 09:57:58.742635  394815 host.go:66] Checking if "multinode-026340" exists ...
	I1025 09:57:58.745176  394815 main.go:141] libmachine: domain multinode-026340 has defined MAC address 52:54:00:41:64:13 in network mk-multinode-026340
	I1025 09:57:58.745692  394815 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:64:13", ip: ""} in network mk-multinode-026340: {Iface:virbr1 ExpiryTime:2025-10-25 10:55:20 +0000 UTC Type:0 Mac:52:54:00:41:64:13 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-026340 Clientid:01:52:54:00:41:64:13}
	I1025 09:57:58.745728  394815 main.go:141] libmachine: domain multinode-026340 has defined IP address 192.168.39.180 and MAC address 52:54:00:41:64:13 in network mk-multinode-026340
	I1025 09:57:58.745906  394815 host.go:66] Checking if "multinode-026340" exists ...
	I1025 09:57:58.746210  394815 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:57:58.748839  394815 main.go:141] libmachine: domain multinode-026340 has defined MAC address 52:54:00:41:64:13 in network mk-multinode-026340
	I1025 09:57:58.749302  394815 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:64:13", ip: ""} in network mk-multinode-026340: {Iface:virbr1 ExpiryTime:2025-10-25 10:55:20 +0000 UTC Type:0 Mac:52:54:00:41:64:13 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-026340 Clientid:01:52:54:00:41:64:13}
	I1025 09:57:58.749348  394815 main.go:141] libmachine: domain multinode-026340 has defined IP address 192.168.39.180 and MAC address 52:54:00:41:64:13 in network mk-multinode-026340
	I1025 09:57:58.749513  394815 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/multinode-026340/id_rsa Username:docker}
	I1025 09:57:58.838515  394815 ssh_runner.go:195] Run: systemctl --version
	I1025 09:57:58.844915  394815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:57:58.862916  394815 kubeconfig.go:125] found "multinode-026340" server: "https://192.168.39.180:8443"
	I1025 09:57:58.862957  394815 api_server.go:166] Checking apiserver status ...
	I1025 09:57:58.862999  394815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:57:58.889535  394815 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2330/cgroup
	W1025 09:57:58.901151  394815 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2330/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:57:58.901229  394815 ssh_runner.go:195] Run: ls
	I1025 09:57:58.905983  394815 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I1025 09:57:58.910898  394815 api_server.go:279] https://192.168.39.180:8443/healthz returned 200:
	ok
	I1025 09:57:58.910924  394815 status.go:463] multinode-026340 apiserver status = Running (err=<nil>)
	I1025 09:57:58.910935  394815 status.go:176] multinode-026340 status: &{Name:multinode-026340 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:57:58.910953  394815 status.go:174] checking status of multinode-026340-m02 ...
	I1025 09:57:58.912629  394815 status.go:371] multinode-026340-m02 host status = "Running" (err=<nil>)
	I1025 09:57:58.912656  394815 host.go:66] Checking if "multinode-026340-m02" exists ...
	I1025 09:57:58.915208  394815 main.go:141] libmachine: domain multinode-026340-m02 has defined MAC address 52:54:00:e0:20:6a in network mk-multinode-026340
	I1025 09:57:58.915646  394815 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e0:20:6a", ip: ""} in network mk-multinode-026340: {Iface:virbr1 ExpiryTime:2025-10-25 10:56:23 +0000 UTC Type:0 Mac:52:54:00:e0:20:6a Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-026340-m02 Clientid:01:52:54:00:e0:20:6a}
	I1025 09:57:58.915668  394815 main.go:141] libmachine: domain multinode-026340-m02 has defined IP address 192.168.39.250 and MAC address 52:54:00:e0:20:6a in network mk-multinode-026340
	I1025 09:57:58.915835  394815 host.go:66] Checking if "multinode-026340-m02" exists ...
	I1025 09:57:58.916029  394815 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:57:58.917965  394815 main.go:141] libmachine: domain multinode-026340-m02 has defined MAC address 52:54:00:e0:20:6a in network mk-multinode-026340
	I1025 09:57:58.918386  394815 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e0:20:6a", ip: ""} in network mk-multinode-026340: {Iface:virbr1 ExpiryTime:2025-10-25 10:56:23 +0000 UTC Type:0 Mac:52:54:00:e0:20:6a Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-026340-m02 Clientid:01:52:54:00:e0:20:6a}
	I1025 09:57:58.918419  394815 main.go:141] libmachine: domain multinode-026340-m02 has defined IP address 192.168.39.250 and MAC address 52:54:00:e0:20:6a in network mk-multinode-026340
	I1025 09:57:58.918537  394815 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-367343/.minikube/machines/multinode-026340-m02/id_rsa Username:docker}
	I1025 09:57:58.998037  394815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:57:59.014400  394815 status.go:176] multinode-026340-m02 status: &{Name:multinode-026340-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:57:59.014470  394815 status.go:174] checking status of multinode-026340-m03 ...
	I1025 09:57:59.016386  394815 status.go:371] multinode-026340-m03 host status = "Stopped" (err=<nil>)
	I1025 09:57:59.016413  394815 status.go:384] host is not running, skipping remaining checks
	I1025 09:57:59.016420  394815 status.go:176] multinode-026340-m03 status: &{Name:multinode-026340-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.51s)
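
The status.go:176 struct dumps in the stderr block mirror what the status command exposes as JSON (the suite itself uses this under CopyFile above); a sketch for scripting against it, noting that the non-zero exit for a stopped node is expected:

    out/minikube-linux-amd64 -p multinode-026340 status --output json || true
    # each object carries the Name/Host/Kubelet/APIServer/Kubeconfig fields
    # seen in the dumps, e.g. Host "Stopped" for multinode-026340-m03 here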

                                                
                                    
TestMultiNode/serial/StartAfterStop (44.78s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 node start m03 -v=5 --alsologtostderr
E1025 09:58:10.962841  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:58:39.621891  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-026340 node start m03 -v=5 --alsologtostderr: (44.272083497s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (44.78s)

TestMultiNode/serial/RestartKeepsNodes (167.9s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-026340
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-026340
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-026340: (28.816636083s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026340 --wait=true -v=5 --alsologtostderr
E1025 10:00:36.551027  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026340 --wait=true -v=5 --alsologtostderr: (2m18.953936468s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-026340
--- PASS: TestMultiNode/serial/RestartKeepsNodes (167.90s)

TestMultiNode/serial/DeleteNode (2.15s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-026340 node delete m03: (1.683161035s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.15s)

TestMultiNode/serial/StopMultiNode (28.03s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 stop
E1025 10:01:47.894353  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-026340 stop: (27.898967128s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026340 status: exit status 7 (67.496231ms)

                                                
                                                
-- stdout --
	multinode-026340
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-026340-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026340 status --alsologtostderr: exit status 7 (67.100064ms)

                                                
                                                
-- stdout --
	multinode-026340
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-026340-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:02:01.872839  396208 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:02:01.873136  396208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:02:01.873148  396208 out.go:374] Setting ErrFile to fd 2...
	I1025 10:02:01.873154  396208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:02:01.873408  396208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-367343/.minikube/bin
	I1025 10:02:01.873603  396208 out.go:368] Setting JSON to false
	I1025 10:02:01.873643  396208 mustload.go:65] Loading cluster: multinode-026340
	I1025 10:02:01.873762  396208 notify.go:220] Checking for updates...
	I1025 10:02:01.874089  396208 config.go:182] Loaded profile config "multinode-026340": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 10:02:01.874109  396208 status.go:174] checking status of multinode-026340 ...
	I1025 10:02:01.876406  396208 status.go:371] multinode-026340 host status = "Stopped" (err=<nil>)
	I1025 10:02:01.876425  396208 status.go:384] host is not running, skipping remaining checks
	I1025 10:02:01.876431  396208 status.go:176] multinode-026340 status: &{Name:multinode-026340 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:02:01.876470  396208 status.go:174] checking status of multinode-026340-m02 ...
	I1025 10:02:01.877719  396208 status.go:371] multinode-026340-m02 host status = "Stopped" (err=<nil>)
	I1025 10:02:01.877734  396208 status.go:384] host is not running, skipping remaining checks
	I1025 10:02:01.877739  396208 status.go:176] multinode-026340-m02 status: &{Name:multinode-026340-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.03s)

TestMultiNode/serial/RestartMultiNode (89.76s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026340 --wait=true -v=5 --alsologtostderr --driver=kvm2 
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026340 --wait=true -v=5 --alsologtostderr --driver=kvm2 : (1m29.290861322s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026340 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (89.76s)

TestMultiNode/serial/ValidateNameConflict (46.27s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-026340
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026340-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-026340-m02 --driver=kvm2 : exit status 14 (79.176889ms)

                                                
                                                
-- stdout --
	* [multinode-026340-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-367343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-367343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-026340-m02' is duplicated with machine name 'multinode-026340-m02' in profile 'multinode-026340'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026340-m03 --driver=kvm2 
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026340-m03 --driver=kvm2 : (45.078351017s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-026340
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-026340: exit status 80 (205.70383ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-026340 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-026340-m03 already exists in multinode-026340-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-026340-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.27s)

TestPreload (150.56s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-460993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.32.0
E1025 10:05:36.550954  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-460993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.32.0: (1m30.181188745s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-460993 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-460993 image pull gcr.io/k8s-minikube/busybox: (2.066365447s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-460993
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-460993: (6.611000421s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-460993 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E1025 10:06:47.894076  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-460993 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (50.655813132s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-460993 image list
helpers_test.go:175: Cleaning up "test-preload-460993" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-460993
--- PASS: TestPreload (150.56s)
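Condensed, the preload check above is: pull an image into a cluster started with --preload=false, bounce the VM, and confirm the image survives the restart. Roughly, with the flags from this run:

    minikube start -p test-preload-460993 --memory=3072 --preload=false --driver=kvm2 --kubernetes-version=v1.32.0
    minikube -p test-preload-460993 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-460993
    minikube start -p test-preload-460993 --memory=3072 --driver=kvm2   # restart without pinning the version
    minikube -p test-preload-460993 image list                          # busybox should still be listed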

                                                
                                    
TestScheduledStopUnix (113.59s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-836643 --memory=3072 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-836643 --memory=3072 --driver=kvm2 : (41.911139354s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-836643 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-836643 -n scheduled-stop-836643
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-836643 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1025 10:07:32.260057  371331 retry.go:31] will retry after 69.842µs: open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/scheduled-stop-836643/pid: no such file or directory
I1025 10:07:32.261261  371331 retry.go:31] will retry after 202.395µs: open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/scheduled-stop-836643/pid: no such file or directory
I1025 10:07:32.262412  371331 retry.go:31] will retry after 215.079µs: open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/scheduled-stop-836643/pid: no such file or directory
I1025 10:07:32.263546  371331 retry.go:31] will retry after 213.287µs: open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/scheduled-stop-836643/pid: no such file or directory
I1025 10:07:32.264685  371331 retry.go:31] will retry after 666.563µs: open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/scheduled-stop-836643/pid: no such file or directory
I1025 10:07:32.265811  371331 retry.go:31] will retry after 681.235µs: open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/scheduled-stop-836643/pid: no such file or directory
I1025 10:07:32.266939  371331 retry.go:31] will retry after 1.168871ms: open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/scheduled-stop-836643/pid: no such file or directory
I1025 10:07:32.269139  371331 retry.go:31] will retry after 2.130085ms: open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/scheduled-stop-836643/pid: no such file or directory
I1025 10:07:32.272393  371331 retry.go:31] will retry after 1.770979ms: open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/scheduled-stop-836643/pid: no such file or directory
I1025 10:07:32.274605  371331 retry.go:31] will retry after 3.81022ms: open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/scheduled-stop-836643/pid: no such file or directory
I1025 10:07:32.278834  371331 retry.go:31] will retry after 6.306084ms: open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/scheduled-stop-836643/pid: no such file or directory
I1025 10:07:32.286078  371331 retry.go:31] will retry after 9.451425ms: open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/scheduled-stop-836643/pid: no such file or directory
I1025 10:07:32.296362  371331 retry.go:31] will retry after 8.596221ms: open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/scheduled-stop-836643/pid: no such file or directory
I1025 10:07:32.305672  371331 retry.go:31] will retry after 21.779891ms: open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/scheduled-stop-836643/pid: no such file or directory
I1025 10:07:32.327965  371331 retry.go:31] will retry after 38.360055ms: open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/scheduled-stop-836643/pid: no such file or directory
I1025 10:07:32.367246  371331 retry.go:31] will retry after 55.347926ms: open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/scheduled-stop-836643/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-836643 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-836643 -n scheduled-stop-836643
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-836643
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-836643 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-836643
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-836643: exit status 7 (64.291339ms)

                                                
                                                
-- stdout --
	scheduled-stop-836643
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-836643 -n scheduled-stop-836643
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-836643 -n scheduled-stop-836643: exit status 7 (61.821563ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-836643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-836643
--- PASS: TestScheduledStopUnix (113.59s)
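Stripped of the pid-file retry noise (the test polling for the scheduler process to appear), the scheduled-stop sequence above is:

    minikube stop -p scheduled-stop-836643 --schedule 5m                # arm a stop five minutes out
    minikube status --format={{.TimeToStop}} -p scheduled-stop-836643   # remaining time until the stop fires
    minikube stop -p scheduled-stop-836643 --cancel-scheduled           # disarm; the host stays Running
    minikube stop -p scheduled-stop-836643 --schedule 15s               # re-arm and let it fire
    minikube status --format={{.Host}} -p scheduled-stop-836643         # exit 7, prints Stopped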

                                                
                                    
TestSkaffold (125.12s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe4005762141 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-585177 --memory=3072 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-585177 --memory=3072 --driver=kvm2 : (41.448706338s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe4005762141 run --minikube-profile skaffold-585177 --kube-context skaffold-585177 --status-check=true --port-forward=false --interactive=false
E1025 10:10:36.551687  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe4005762141 run --minikube-profile skaffold-585177 --kube-context skaffold-585177 --status-check=true --port-forward=false --interactive=false: (1m8.185457293s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-74488c497d-4x9fw" [dbf1352f-179f-4117-951a-f5a440683092] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.005067963s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-6c97959c4-bm5nc" [c1a482af-fa81-4a30-9047-4bf2b620717e] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00459581s
helpers_test.go:175: Cleaning up "skaffold-585177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-585177
--- PASS: TestSkaffold (125.12s)
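In short, the test hands a fresh profile to skaffold via --minikube-profile so skaffold builds directly into the cluster. The same flow with this run's binaries (the /tmp path is the skaffold v2.16.1 binary the test downloads; the final kubectl check is an illustrative equivalent of the pod waits above, not part of the test):

    out/minikube-linux-amd64 start -p skaffold-585177 --memory=3072 --driver=kvm2
    /tmp/skaffold.exe4005762141 run --minikube-profile skaffold-585177 --kube-context skaffold-585177 \
        --status-check=true --port-forward=false --interactive=false
    kubectl get pods -l 'app in (leeroy-app, leeroy-web)'   # both deployments should show Running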

                                                
                                    
TestRunningBinaryUpgrade (170.04s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2107703301 start -p running-upgrade-595347 --memory=3072 --vm-driver=kvm2 
E1025 10:11:47.894625  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2107703301 start -p running-upgrade-595347 --memory=3072 --vm-driver=kvm2 : (1m45.583650587s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-595347 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-595347 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 : (1m0.961402334s)
helpers_test.go:175: Cleaning up "running-upgrade-595347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-595347
--- PASS: TestRunningBinaryUpgrade (170.04s)
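The running-binary upgrade amounts to a second start against the same profile with the newer binary, which detects and upgrades the existing VM in place:

    /tmp/minikube-v1.32.0.2107703301 start -p running-upgrade-595347 --memory=3072 --vm-driver=kvm2   # old binary, old flag spelling
    out/minikube-linux-amd64 start -p running-upgrade-595347 --memory=3072 --driver=kvm2              # new binary reuses the profile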

                                                
                                    
TestKubernetesUpgrade (197.35s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-720616 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2 
E1025 10:15:36.550978  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:15:36.669732  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:15:36.676206  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:15:36.687682  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:15:36.709235  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:15:36.750719  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:15:36.832282  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:15:36.993924  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:15:37.315710  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:15:37.957920  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:15:39.239632  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:15:41.802600  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:15:46.924362  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-720616 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2 : (1m24.53710891s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-720616
E1025 10:16:58.610331  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:59.185479  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/old-k8s-version-019967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:59.191982  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/old-k8s-version-019967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:59.203482  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/old-k8s-version-019967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:59.225013  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/old-k8s-version-019967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:59.266532  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/old-k8s-version-019967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:59.348100  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/old-k8s-version-019967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:59.509747  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/old-k8s-version-019967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-720616: (3.185428646s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-720616 status --format={{.Host}}
E1025 10:16:59.831927  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/old-k8s-version-019967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-720616 status --format={{.Host}}: exit status 7 (88.183671ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-720616 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2 
E1025 10:17:00.473536  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/old-k8s-version-019967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:01.755841  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/old-k8s-version-019967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:04.318365  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/old-k8s-version-019967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:09.439807  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/old-k8s-version-019967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-720616 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2 : (51.042289664s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-720616 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-720616 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-720616 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 : exit status 106 (91.758023ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-720616] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-367343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-367343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-720616
	    minikube start -p kubernetes-upgrade-720616 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7206162 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-720616 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-720616 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-720616 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2 : (57.334729286s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-720616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-720616
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-720616: (1.010883783s)
--- PASS: TestKubernetesUpgrade (197.35s)
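Reduced to its four starts (same profile throughout), the upgrade/downgrade contract checked above is:

    minikube start -p kubernetes-upgrade-720616 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2
    minikube stop -p kubernetes-upgrade-720616
    minikube start -p kubernetes-upgrade-720616 --memory=3072 --kubernetes-version=v1.34.1 --driver=kvm2   # upgrade: allowed
    minikube start -p kubernetes-upgrade-720616 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2   # downgrade: exit 106 (K8S_DOWNGRADE_UNSUPPORTED)
    minikube start -p kubernetes-upgrade-720616 --memory=3072 --kubernetes-version=v1.34.1 --driver=kvm2   # restart at the current version still works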

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-586342 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-586342 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 : exit status 14 (89.624335ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-586342] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-367343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-367343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
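The exit-14 here is a pure flag-validation failure: --no-kubernetes and --kubernetes-version are mutually exclusive, and the check fires before any VM work. The suggested unset clears a version pinned through global config:

    minikube start -p NoKubernetes-586342 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2   # exit 14 (MK_USAGE)
    minikube config unset kubernetes-version                                                           # drop a globally pinned version
    minikube start -p NoKubernetes-586342 --no-kubernetes --driver=kvm2                                # valid: no version flag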

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (69.75s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-019967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-019967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0: (1m9.751971575s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (69.75s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (89.27s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-586342 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-586342 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (1m29.015843883s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-586342 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (89.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.96s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-019967 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [079a2647-b585-4cd6-9b2b-e23b90a5f34b] Pending
helpers_test.go:352: "busybox" [079a2647-b585-4cd6-9b2b-e23b90a5f34b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [079a2647-b585-4cd6-9b2b-e23b90a5f34b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004213192s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-019967 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.27s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-019967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-019967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.170236875s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-019967 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.27s)
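The --images/--registries pair pins the metrics-server addon to a custom image behind a placeholder registry (fake.domain); the describe call is how the test confirms the overrides landed in the deployment spec:

    minikube addons enable metrics-server -p old-k8s-version-019967 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-version-019967 describe deploy/metrics-server -n kube-system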

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (14.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-019967 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-019967 --alsologtostderr -v=3: (14.058048019s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.06s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.61s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-586342 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-586342 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (16.38050689s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-586342 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-586342 status -o json: exit status 2 (228.610494ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-586342","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-586342
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-586342: (1.002600043s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.61s)
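Re-running start with --no-kubernetes against a profile that already runs Kubernetes keeps the VM but stops the control plane, hence the fast ~16s start and the exit-2 status with Host Running / Kubelet Stopped:

    minikube start -p NoKubernetes-586342 --no-kubernetes --memory=3072 --driver=kvm2
    minikube -p NoKubernetes-586342 status -o json   # exit 2: "Host":"Running","Kubelet":"Stopped"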

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019967 -n old-k8s-version-019967
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019967 -n old-k8s-version-019967: exit status 7 (70.799736ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-019967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (46.79s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-019967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-019967 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0: (46.506250578s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019967 -n old-k8s-version-019967
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.79s)

                                                
                                    
TestNoKubernetes/serial/Start (34.59s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-586342 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-586342 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (34.588372118s)
--- PASS: TestNoKubernetes/serial/Start (34.59s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-586342 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-586342 "sudo systemctl is-active --quiet service kubelet": exit status 1 (179.177679ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)
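The verification itself is a one-liner over SSH: systemctl returns non-zero when the kubelet unit is inactive or absent, and minikube ssh propagates that as its own exit code, which is what the test asserts. A sketch (the || branch is illustrative, not part of the test):

    minikube ssh -p NoKubernetes-586342 "sudo systemctl is-active --quiet service kubelet" \
        || echo "kubelet not active"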

                                                
                                    
TestNoKubernetes/serial/ProfileList (15.99s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (15.236302203s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.99s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (9.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-h4pfc" [ce0ae58a-f2b9-4660-aa10-960f6e791450] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-h4pfc" [ce0ae58a-f2b9-4660-aa10-960f6e791450] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.006898379s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-h4pfc" [ce0ae58a-f2b9-4660-aa10-960f6e791450] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005818088s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-019967 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-019967 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.58s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-586342
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-586342: (1.575888483s)
--- PASS: TestNoKubernetes/serial/Stop (1.58s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (34.05s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-586342 --driver=kvm2 
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-586342 --driver=kvm2 : (34.045692329s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (34.05s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-586342 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-586342 "sudo systemctl is-active --quiet service kubelet": exit status 1 (204.183498ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.37s)
=== RUN   TestStoppedBinaryUpgrade/Setup
E1025 10:16:17.648260  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStoppedBinaryUpgrade/Setup (3.37s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (100.8s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.735381706 start -p stopped-upgrade-411608 --memory=3072 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.735381706 start -p stopped-upgrade-411608 --memory=3072 --vm-driver=kvm2 : (57.454111494s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.735381706 -p stopped-upgrade-411608 stop
E1025 10:17:19.682221  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/old-k8s-version-019967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.735381706 -p stopped-upgrade-411608 stop: (13.083838253s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-411608 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 
E1025 10:17:40.164376  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/old-k8s-version-019967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-411608 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 : (30.257466723s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (100.80s)
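Same contract as TestRunningBinaryUpgrade, except the profile is stopped with the old binary first, so the new binary has to boot and migrate a cold cluster:

    /tmp/minikube-v1.32.0.735381706 start -p stopped-upgrade-411608 --memory=3072 --vm-driver=kvm2
    /tmp/minikube-v1.32.0.735381706 -p stopped-upgrade-411608 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-411608 --memory=3072 --driver=kvm2
    out/minikube-linux-amd64 logs -p stopped-upgrade-411608   # the MinikubeLogs subtest below reads the upgraded cluster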

                                                
                                    
TestPause/serial/Start (100.74s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-471860 --memory=3072 --install-addons=false --wait=all --driver=kvm2 
E1025 10:16:47.893558  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-471860 --memory=3072 --install-addons=false --wait=all --driver=kvm2 : (1m40.735918221s)
--- PASS: TestPause/serial/Start (100.74s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-411608
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (96.93s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-699897 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-699897 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.34.1: (1m36.933065104s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (96.93s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (69.05s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-471860 --alsologtostderr -v=1 --driver=kvm2 
E1025 10:18:20.532427  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:18:21.126256  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/old-k8s-version-019967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-471860 --alsologtostderr -v=1 --driver=kvm2 : (1m9.026078265s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (69.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (88.53s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-585452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-585452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.1: (1m28.526262151s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.53s)

                                                
                                    
TestPause/serial/Pause (0.61s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-471860 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.61s)

                                                
                                    
TestPause/serial/VerifyStatus (0.24s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-471860 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-471860 --output=json --layout=cluster: exit status 2 (242.285958ms)

                                                
                                                
-- stdout --
	{"Name":"pause-471860","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-471860","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)

                                                
                                    
TestPause/serial/Unpause (0.61s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-471860 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)
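Pause freezes the workload containers while the VM stays up, which is why status exits 2 with the tongue-in-cheek StatusCode 418 ("Paused") and a stopped kubelet. The round trip, including the re-pause and delete that the subtests below perform:

    minikube pause -p pause-471860
    minikube status -p pause-471860 --output=json --layout=cluster   # exit 2; apiserver 418/Paused, kubelet 405/Stopped
    minikube unpause -p pause-471860
    minikube pause -p pause-471860                                   # pausing again succeeds (PauseAgain)
    minikube delete -p pause-471860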

                                                
                                    
TestPause/serial/PauseAgain (0.77s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-471860 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.77s)

                                                
                                    
TestPause/serial/DeletePaused (0.89s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-471860 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.89s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (15.25s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1025 10:19:25.620859  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:19:35.863183  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.251512695s)
--- PASS: TestPause/serial/VerifyDeletedResources (15.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.34s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-699897 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [de433259-1667-4ed4-bf30-92c5c4b4adbb] Pending
helpers_test.go:352: "busybox" [de433259-1667-4ed4-bf30-92c5c4b4adbb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [de433259-1667-4ed4-bf30-92c5c4b4adbb] Running
E1025 10:19:43.047827  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/old-k8s-version-019967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004970401s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-699897 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.71s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-676856 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-676856 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.1: (1m28.710755683s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.71s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-699897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-699897 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (14.15s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-699897 --alsologtostderr -v=3
E1025 10:19:56.344754  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-699897 --alsologtostderr -v=3: (14.147899032s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (14.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-699897 -n no-preload-699897
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-699897 -n no-preload-699897: exit status 7 (79.762455ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-699897 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (51.66s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-699897 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-699897 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.34.1: (51.343542514s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-699897 -n no-preload-699897
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.66s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (72s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-095242 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-095242 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.34.1: (1m12.000850899s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (72.00s)

TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-585452 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8992e96e-1458-40d9-a082-29b7bcebc489] Pending
helpers_test.go:352: "busybox" [8992e96e-1458-40d9-a082-29b7bcebc489] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8992e96e-1458-40d9-a082-29b7bcebc489] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004923865s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-585452 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.32s)
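
The deploy step is apply-then-poll: create the busybox pod from testdata, then wait until every pod matching integration-test=busybox reports Running. A rough standalone equivalent of that wait loop, assuming kubectl is pointed at the same context; the function name and poll interval are illustrative, not the suite's own:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPodsRunning polls kubectl until all pods matching the label
// selector report phase Running, or the timeout elapses.
func waitForPodsRunning(kubeContext, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err == nil && len(phases) > 0 {
			allRunning := true
			for _, p := range phases {
				if p != "Running" {
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(2 * time.Second) // poll interval; the suite's own cadence may differ
	}
	return fmt.Errorf("pods %q not Running within %v", selector, timeout)
}

func main() {
	fmt.Println(waitForPodsRunning("embed-certs-585452", "integration-test=busybox", 8*time.Minute))
}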

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.78s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-585452 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-585452 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.707483375s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-585452 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.78s)

TestStartStop/group/embed-certs/serial/Stop (12.23s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-585452 --alsologtostderr -v=3
E1025 10:20:36.550876  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:20:36.669389  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:20:37.306553  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-585452 --alsologtostderr -v=3: (12.228649683s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.23s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-585452 -n embed-certs-585452
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-585452 -n embed-certs-585452: exit status 7 (89.772956ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-585452 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (48.06s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-585452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-585452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.1: (47.789645869s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-585452 -n embed-certs-585452
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.06s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4c4l4" [45af09b3-2172-4f92-9d95-62e9fb02773c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4c4l4" [45af09b3-2172-4f92-9d95-62e9fb02773c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.004402165s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4c4l4" [45af09b3-2172-4f92-9d95-62e9fb02773c] Running
E1025 10:21:04.374855  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006407076s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-699897 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-699897 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
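
The image verification step lists everything cached in the profile and calls out images outside minikube's expected set; the busybox and gvisor-addon entries above are leftovers from earlier tests in the run, which is why they are reported. A sketch of such a check, assuming --format=json emits a flat JSON array of image references (the log does not show the actual schema) and using a stand-in allow-list:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "no-preload-699897",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	// Assumed shape: a JSON array of "repo:tag" strings. Adjust the type
	// if the real schema differs.
	var images []string
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	// Illustrative allow-list; the real expected set lives in the test source.
	expected := map[string]bool{
		"registry.k8s.io/pause:3.10": true,
	}
	for _, img := range images {
		if !expected[img] {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}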

TestStartStop/group/no-preload/serial/Pause (2.93s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-699897 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-699897 -n no-preload-699897
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-699897 -n no-preload-699897: exit status 2 (249.515319ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-699897 -n no-preload-699897
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-699897 -n no-preload-699897: exit status 2 (244.986588ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-699897 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-699897 -n no-preload-699897
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-699897 -n no-preload-699897
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.93s)
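
The pause subtest is a round trip: pause the profile, confirm via status templates that the API server reports Paused and the kubelet Stopped (each check exits with status 2, which the test tolerates), then unpause and query both fields again. Condensed into a sketch with the same exit-code tolerance; the helper name is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// statusField reads one field of `minikube status`; exit status 2 signals a
// paused or stopped component here, so it is not treated as fatal.
func statusField(profile, field string) (string, error) {
	out, err := exec.Command("minikube", "status", "--format={{."+field+"}}", "-p", profile).CombinedOutput()
	if err != nil {
		if ee, ok := err.(*exec.ExitError); !ok || ee.ExitCode() != 2 {
			return "", err
		}
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	profile := "no-preload-699897"
	exec.Command("minikube", "pause", "-p", profile).Run()
	api, _ := statusField(profile, "APIServer") // "Paused" in the run above
	kub, _ := statusField(profile, "Kubelet")   // "Stopped" in the run above
	fmt.Println("paused:", api, kub)
	exec.Command("minikube", "unpause", "-p", profile).Run()
	api, _ = statusField(profile, "APIServer") // exits 0 again once resumed
	fmt.Println("resumed:", api)
}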

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (15.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-676856 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ac116686-5df6-4bdf-9753-81c31fe0543f] Pending
helpers_test.go:352: "busybox" [ac116686-5df6-4bdf-9753-81c31fe0543f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ac116686-5df6-4bdf-9753-81c31fe0543f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 15.004796943s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-676856 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (15.34s)

TestNetworkPlugins/group/auto/Start (59.71s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-266353 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-266353 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (59.713684136s)
--- PASS: TestNetworkPlugins/group/auto/Start (59.71s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-095242 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-095242 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.079270701s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/newest-cni/serial/Stop (14.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-095242 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-095242 --alsologtostderr -v=3: (14.339408983s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (14.34s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-676856 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-676856 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.029781282s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-676856 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (14.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-676856 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-676856 --alsologtostderr -v=3: (14.919060644s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (14.92s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-92npj" [04fa9012-6eaa-490e-b639-b391b0a511b2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-92npj" [04fa9012-6eaa-490e-b639-b391b0a511b2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.005315255s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-095242 -n newest-cni-095242
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-095242 -n newest-cni-095242: exit status 7 (66.4899ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-095242 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/newest-cni/serial/SecondStart (40.14s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-095242 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-095242 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.34.1: (39.789850637s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-095242 -n newest-cni-095242
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.14s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-92npj" [04fa9012-6eaa-490e-b639-b391b0a511b2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005055163s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-585452 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-676856 -n default-k8s-diff-port-676856
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-676856 -n default-k8s-diff-port-676856: exit status 7 (77.406634ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-676856 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-676856 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-676856 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.1: (56.570779108s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-676856 -n default-k8s-diff-port-676856
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.92s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-585452 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.72s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-585452 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-585452 -n embed-certs-585452
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-585452 -n embed-certs-585452: exit status 2 (234.523292ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-585452 -n embed-certs-585452
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-585452 -n embed-certs-585452: exit status 2 (246.992757ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-585452 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-585452 -n embed-certs-585452
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-585452 -n embed-certs-585452
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.72s)

TestNetworkPlugins/group/kindnet/Start (101.25s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-266353 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E1025 10:21:47.894090  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:21:59.185532  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/old-k8s-version-019967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:21:59.228115  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/gvisor-130661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-266353 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m41.24856683s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (101.25s)

TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-266353 "pgrep -a kubelet"
I1025 10:22:10.973946  371331 config.go:182] Loaded profile config "auto-266353": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)
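
The KubeletFlags check simply shells into the node and asks pgrep for the kubelet process; pgrep -a prints the PID together with the full command line, so the kubelet's flags can be inspected or asserted on. The same probe outside the harness, as a sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `minikube ssh` runs the quoted command on the node, exactly as in
	// the log line above.
	out, err := exec.Command("minikube", "ssh", "-p", "auto-266353",
		"pgrep -a kubelet").CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	fmt.Print(string(out))
}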

TestNetworkPlugins/group/auto/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-266353 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-z8g6p" [42b84f67-478b-4bde-8236-d41253290a1a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-z8g6p" [42b84f67-478b-4bde-8236-d41253290a1a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005741169s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.29s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-095242 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (3.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-095242 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-095242 --alsologtostderr -v=1: (1.063517219s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-095242 -n newest-cni-095242
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-095242 -n newest-cni-095242: exit status 2 (290.627148ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-095242 -n newest-cni-095242
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-095242 -n newest-cni-095242: exit status 2 (278.80425ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-095242 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-095242 -n newest-cni-095242
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-095242 -n newest-cni-095242
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.37s)

TestNetworkPlugins/group/flannel/Start (83.65s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-266353 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-266353 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m23.647890136s)
--- PASS: TestNetworkPlugins/group/flannel/Start (83.65s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-266353 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-266353 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-266353 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
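
The three short checks above all run from inside the netcat deployment: DNS resolution of kubernetes.default, a TCP dial to localhost:8080, and a hairpin dial to the pod's own netcat service, where traffic leaves the pod and re-enters through the service VIP. One sketch covering all three, assuming the same context and deployment names:

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a shell command inside the netcat deployment via kubectl exec.
func probe(kubeContext, shellCmd string) error {
	return exec.Command("kubectl", "--context", kubeContext, "exec",
		"deployment/netcat", "--", "/bin/sh", "-c", shellCmd).Run()
}

func main() {
	kubeContext := "auto-266353"
	checks := []struct{ name, cmd string }{
		{"dns", "nslookup kubernetes.default"},
		{"localhost", "nc -w 5 -i 5 -z localhost 8080"},
		// Hairpin: the pod dials its own service name, so the connection
		// loops back through the service VIP.
		{"hairpin", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for _, c := range checks {
		if err := probe(kubeContext, c.cmd); err != nil {
			fmt.Println(c.name, "failed:", err)
		} else {
			fmt.Println(c.name, "ok")
		}
	}
}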

TestNetworkPlugins/group/enable-default-cni/Start (107.9s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-266353 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-266353 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m47.904731896s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (107.90s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tnvnr" [69f4de08-f6b9-4f28-b5c1-e06cdbc4c95e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tnvnr" [69f4de08-f6b9-4f28-b5c1-e06cdbc4c95e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.005493393s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tnvnr" [69f4de08-f6b9-4f28-b5c1-e06cdbc4c95e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005040874s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-676856 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-676856 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-676856 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-676856 -n default-k8s-diff-port-676856
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-676856 -n default-k8s-diff-port-676856: exit status 2 (255.060075ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-676856 -n default-k8s-diff-port-676856
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-676856 -n default-k8s-diff-port-676856: exit status 2 (227.101713ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-676856 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-676856 -n default-k8s-diff-port-676856
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-676856 -n default-k8s-diff-port-676856
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.87s)

TestNetworkPlugins/group/bridge/Start (109.24s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-266353 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-266353 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m49.24192423s)
--- PASS: TestNetworkPlugins/group/bridge/Start (109.24s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-46ffd" [394ae561-2285-4c12-ab7c-c47824a95efa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004222822s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-266353 "pgrep -a kubelet"
I1025 10:23:35.067855  371331 config.go:182] Loaded profile config "kindnet-266353": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-266353 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-n2bnd" [52947344-75de-497c-9b75-d8bf3ea72859] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-n2bnd" [52947344-75de-497c-9b75-d8bf3ea72859] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004218702s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-w9dw5" [0fd5107f-e4d7-410f-9d34-686192e8bc6d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005788347s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-266353 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-266353 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-266353 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-266353 "pgrep -a kubelet"
I1025 10:23:48.902720  371331 config.go:182] Loaded profile config "flannel-266353": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

TestNetworkPlugins/group/flannel/NetCatPod (14.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-266353 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qssbk" [38e26036-7173-4b2d-a8d1-55cc0a62f315] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qssbk" [38e26036-7173-4b2d-a8d1-55cc0a62f315] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.005826252s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.28s)

TestNetworkPlugins/group/kubenet/Start (94.33s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-266353 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-266353 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m34.33016606s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (94.33s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-266353 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-266353 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-266353 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/Start (73.82s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-266353 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-266353 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m13.821281094s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.82s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-266353 "pgrep -a kubelet"
I1025 10:24:26.372955  371331 config.go:182] Loaded profile config "enable-default-cni-266353": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-266353 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cgx7j" [3bef97e7-2982-45f7-9506-df266f8f0797] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cgx7j" [3bef97e7-2982-45f7-9506-df266f8f0797] Running
E1025 10:24:38.391081  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/no-preload-699897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:24:38.397533  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/no-preload-699897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:24:38.408931  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/no-preload-699897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:24:38.430437  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/no-preload-699897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:24:38.471984  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/no-preload-699897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:24:38.553511  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/no-preload-699897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:24:38.715900  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/no-preload-699897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:24:39.037820  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/no-preload-699897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:24:39.680059  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/no-preload-699897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:24:40.962461  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/no-preload-699897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 15.003578629s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.26s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-266353 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-266353 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-266353 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)
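Note: HairPin covers "hairpin" traffic, where a pod connects to the Service that fronts it and the packet must loop back to the same pod. The check is the same nc probe aimed at the service name instead of localhost:

  kubectl --context <profile> exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

Here `netcat` resolves via cluster DNS to the Service backing the probing pod itself; this can fail when hairpin NAT is not enabled on the node's bridge, which is what the subtest guards against.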

TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-266353 "pgrep -a kubelet"
I1025 10:24:48.465269  371331 config.go:182] Loaded profile config "bridge-266353": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)
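Note: KubeletFlags shells into the node and lists the running kubelet together with its full command line (pgrep -a prints the PID plus all arguments), which is how the suite inspects the kubelet flags in effect. Against any profile:

  out/minikube-linux-amd64 ssh -p <profile> "pgrep -a kubelet"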

TestNetworkPlugins/group/bridge/NetCatPod (22.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-266353 replace --force -f testdata/netcat-deployment.yaml
E1025 10:24:48.645863  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/no-preload-699897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mqgq5" [6c10774b-f9b7-4997-9003-063ff411455a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mqgq5" [6c10774b-f9b7-4997-9003-063ff411455a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 22.005640657s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (22.28s)
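Note: the NetCatPod subtests poll pod status through a test helper until the app=netcat pod reports Running and Ready. A rough plain-kubectl equivalent of that wait (profile name illustrative) is:

  kubectl --context <profile> wait --for=condition=Ready pod -l app=netcat --timeout=15m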

TestNetworkPlugins/group/calico/Start (95.69s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-266353 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E1025 10:24:58.887429  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/no-preload-699897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-266353 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m35.686521803s)
--- PASS: TestNetworkPlugins/group/calico/Start (95.69s)
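Note: each Start subtest provisions a fresh cluster with the CNI under test. Stripped of the harness, the invocation reduces to a plain minikube start; the profile name below is illustrative, and the flags are exactly the ones shown in the log line above:

  minikube start -p <profile> --memory=3072 --wait=true --wait-timeout=15m --cni=calico --driver=kvm2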

TestNetworkPlugins/group/bridge/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-266353 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-266353 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-266353 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestNetworkPlugins/group/false/Start (94.25s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-266353 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-266353 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m34.24761656s)
--- PASS: TestNetworkPlugins/group/false/Start (94.25s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-266353 "pgrep -a kubelet"
I1025 10:25:33.827744  371331 config.go:182] Loaded profile config "custom-flannel-266353": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-266353 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xfvpn" [9082f45a-e418-43d7-a892-01027167af47] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xfvpn" [9082f45a-e418-43d7-a892-01027167af47] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003666731s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.27s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-266353 "pgrep -a kubelet"
E1025 10:25:36.550567  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/addons-442185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1025 10:25:36.554033  371331 config.go:182] Loaded profile config "kubenet-266353": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.23s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-266353 replace --force -f testdata/netcat-deployment.yaml
E1025 10:25:36.669480  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/skaffold-585177/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bjtrw" [1379b070-a589-43e6-9a3d-31d2540733a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bjtrw" [1379b070-a589-43e6-9a3d-31d2540733a5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.005484813s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.33s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-266353 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-266353 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-266353 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/kubenet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-266353 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

TestNetworkPlugins/group/kubenet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-266353 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.17s)

TestNetworkPlugins/group/kubenet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-266353 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.23s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-scczw" [4404b72c-224e-481e-b69f-e1bbe9d70f3a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003807221s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-266353 "pgrep -a kubelet"
I1025 10:26:40.477961  371331 config.go:182] Loaded profile config "calico-266353": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

TestNetworkPlugins/group/calico/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-266353 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ftvlj" [be493669-850e-44e9-bc54-b348ad60e53f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ftvlj" [be493669-850e-44e9-bc54-b348ad60e53f] Running
E1025 10:26:47.894279  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/functional-447073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:26:51.041805  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/default-k8s-diff-port-676856/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004563175s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.29s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-266353 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-266353 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-266353 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/false/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-266353 "pgrep -a kubelet"
I1025 10:27:00.784757  371331 config.go:182] Loaded profile config "false-266353": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.19s)

TestNetworkPlugins/group/false/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-266353 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4g75w" [1d415a47-1ad2-45d4-a404-12075e371982] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4g75w" [1d415a47-1ad2-45d4-a404-12075e371982] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.004868697s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.25s)

TestNetworkPlugins/group/false/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-266353 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-266353 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1025 10:27:11.250005  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/auto-266353/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:27:11.256429  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/auto-266353/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:27:11.267913  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/auto-266353/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:27:11.289481  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/auto-266353/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-266353 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1025 10:27:11.331679  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/auto-266353/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:27:11.413301  371331 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-367343/.minikube/profiles/auto-266353/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

Test skip (34/344)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/PodmanEnv 0
146 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
147 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
148 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
149 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
151 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
152 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
187 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
214 TestKicCustomNetwork 0
215 TestKicExistingNetwork 0
216 TestKicCustomSubnet 0
217 TestKicStaticIP 0
249 TestChangeNoneUser 0
252 TestScheduledStopWindows 0
256 TestInsufficientStorage 0
260 TestMissingContainerUpgrade 0
268 TestStartStop/group/disable-driver-mounts 0.23
290 TestNetworkPlugins/group/cilium 3.92
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-478245" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-478245
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)

TestNetworkPlugins/group/cilium (3.92s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-266353 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-266353

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-266353

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-266353

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-266353

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-266353

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-266353

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-266353

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-266353

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-266353

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-266353

>>> host: /etc/nsswitch.conf:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: /etc/hosts:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: /etc/resolv.conf:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-266353

>>> host: crictl pods:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: crictl containers:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> k8s: describe netcat deployment:
error: context "cilium-266353" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-266353" does not exist

>>> k8s: netcat logs:
error: context "cilium-266353" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-266353" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-266353" does not exist

>>> k8s: coredns logs:
error: context "cilium-266353" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-266353" does not exist

>>> k8s: api server logs:
error: context "cilium-266353" does not exist

>>> host: /etc/cni:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: ip a s:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: ip r s:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: iptables-save:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: iptables table nat:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-266353

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-266353

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-266353" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-266353" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-266353

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-266353

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-266353" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-266353" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-266353" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-266353" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-266353" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: kubelet daemon config:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> k8s: kubelet logs:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-266353

>>> host: docker daemon status:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: docker daemon config:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: docker system info:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: cri-docker daemon status:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: cri-docker daemon config:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: cri-dockerd version:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: containerd daemon status:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: containerd daemon config:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: containerd config dump:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: crio daemon status:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: crio daemon config:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: /etc/crio:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

>>> host: crio config:
* Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266353"

----------------------- debugLogs end: cilium-266353 [took: 3.753595472s] --------------------------------
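
Triage note: every ">>> host: ..." probe above reports the same failure because the cilium-266353 profile was never created; the debug-log collector still runs its full probe set against that profile name even though the test is skipped. The message can be reproduced by hand against any profile that does not exist (which minikube commands the collector actually invokes per probe is an assumption; the error text itself is taken from the log above):

  $ minikube profile list
  $ minikube ssh -p cilium-266353
  * Profile "cilium-266353" not found. Run "minikube profile list" to view all profiles.
  To start a cluster, run: "minikube start -p cilium-266353"
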
helpers_test.go:175: Cleaning up "cilium-266353" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-266353
--- SKIP: TestNetworkPlugins/group/cilium (3.92s)
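
Triage note: the single ">>> k8s: cms:" failure at the top of this debug log has the same root cause as the host probes: with no cilium-266353 cluster, the kubeconfig has no matching context, and kubectl reports its standard missing-context error. A minimal reproduction sketch (the exact kubectl query the collector issues is an assumption; the error line is verbatim from the log):

  $ kubectl --context cilium-266353 get configmaps -A
  Error in configuration: context was not found for specified context: cilium-266353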