Test Report: Docker_Linux_containerd 22122

                    
022dd2780ab8206ac68153a1ee37fdbcc6da7ccd:2025-12-13:42761

Test fail (10/420)

TestAddons/parallel/LocalPath (302.49s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-824997 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-824997 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-824997 get pvc test-pvc -o jsonpath={.status.phase} -n default
	... (the identical helpers_test.go:403 poll above repeats for 301 attempts in total across the 5m0s wait) ...
helpers_test.go:403: (dbg) Non-zero exit: kubectl --context addons-824997 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (2.991µs)
helpers_test.go:405: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:962: failed waiting for PVC test-pvc: context deadline exceeded
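The failure above is the harness hitting its 5m0s deadline while polling the PVC phase. A rough manual triage of the same state, run against the same profile, might look like the sketch below; the local-path-storage namespace and local-path-provisioner deployment name are assumptions based on the storage-provisioner-rancher addon, not values taken from this log:

	# Re-run the exact phase poll from helpers_test.go:403
	kubectl --context addons-824997 -n default get pvc test-pvc -o jsonpath='{.status.phase}'
	# PVC and pod events usually explain why binding or provisioning stalled
	kubectl --context addons-824997 -n default describe pvc test-pvc
	kubectl --context addons-824997 -n default get pods -o wide
	# Assumed location of the provisioner shipped by the storage-provisioner-rancher addon
	kubectl --context addons-824997 -n local-path-storage logs deploy/local-path-provisioner --tail=50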
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-824997
helpers_test.go:244: (dbg) docker inspect addons-824997:

-- stdout --
	[
	    {
	        "Id": "275fab871a34e6d25ca908deef13a56cc950401045036780a45c0af40bf42f72",
	        "Created": "2025-12-13T13:05:36.030696305Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 408010,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:05:36.06316605Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/275fab871a34e6d25ca908deef13a56cc950401045036780a45c0af40bf42f72/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/275fab871a34e6d25ca908deef13a56cc950401045036780a45c0af40bf42f72/hostname",
	        "HostsPath": "/var/lib/docker/containers/275fab871a34e6d25ca908deef13a56cc950401045036780a45c0af40bf42f72/hosts",
	        "LogPath": "/var/lib/docker/containers/275fab871a34e6d25ca908deef13a56cc950401045036780a45c0af40bf42f72/275fab871a34e6d25ca908deef13a56cc950401045036780a45c0af40bf42f72-json.log",
	        "Name": "/addons-824997",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-824997:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-824997",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "275fab871a34e6d25ca908deef13a56cc950401045036780a45c0af40bf42f72",
	                "LowerDir": "/var/lib/docker/overlay2/5897ca4b990144ad58eb4a601b3c473cec7fb0d5b2e6b67946a57f7d40690116-init/diff:/var/lib/docker/overlay2/be5aa5e3490e76c6aea57ece480ce7168b4c08e9f5040b5571a6aeb87c809618/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5897ca4b990144ad58eb4a601b3c473cec7fb0d5b2e6b67946a57f7d40690116/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5897ca4b990144ad58eb4a601b3c473cec7fb0d5b2e6b67946a57f7d40690116/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5897ca4b990144ad58eb4a601b3c473cec7fb0d5b2e6b67946a57f7d40690116/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-824997",
	                "Source": "/var/lib/docker/volumes/addons-824997/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-824997",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-824997",
	                "name.minikube.sigs.k8s.io": "addons-824997",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d0a8e221db5a9fb3e338967df1b36972a779b85868674797e18099e53c124212",
	            "SandboxKey": "/var/run/docker/netns/d0a8e221db5a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33156"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33154"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33155"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-824997": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b68c447bb7172f888e802e6cf414e1c2f46b83875fe65092c6383463c59b9454",
	                    "EndpointID": "e609887212cc466e7ff9d889d9bb3735b54baa7b4261cbcaa2fc781be6ab3694",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "7a:ba:25:85:6a:10",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-824997",
	                        "275fab871a34"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
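Rather than reading the full inspect dump above, individual fields can be pulled with docker inspect Go templates; the sketch below assumes the same container name and mirrors the field paths shown in the JSON (HostPort 33155 for 8443/tcp, IPAddress 192.168.49.2):

	# Host port published for the container's 8443/tcp (the apiserver port)
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' addons-824997
	# Container IP on the addons-824997 network
	docker inspect -f '{{ (index .NetworkSettings.Networks "addons-824997").IPAddress }}' addons-824997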
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-824997 -n addons-824997
helpers_test.go:253: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-824997 logs -n 25: (1.067415524s)
helpers_test.go:261: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                      ARGS                                                                                                                                                                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-887901 --alsologtostderr --binary-mirror http://127.0.0.1:45211 --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                               │ binary-mirror-887901 │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │                     │
	│ delete  │ -p binary-mirror-887901                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ binary-mirror-887901 │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │ 13 Dec 25 13:05 UTC │
	│ addons  │ disable dashboard -p addons-824997                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │                     │
	│ addons  │ enable dashboard -p addons-824997                                                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │                     │
	│ start   │ -p addons-824997 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │ 13 Dec 25 13:07 UTC │
	│ addons  │ addons-824997 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:07 UTC │ 13 Dec 25 13:07 UTC │
	│ addons  │ addons-824997 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ enable headlamp -p addons-824997 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-824997 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-824997 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-824997 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ ip      │ addons-824997 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-824997 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-824997 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-824997 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ ssh     │ addons-824997 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ ip      │ addons-824997 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-824997 addons disable ingress-dns --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-824997 addons disable ingress --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-824997 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-824997                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-824997 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-824997 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-824997 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:09 UTC │
	│ addons  │ addons-824997 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-824997        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:09 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:05:15
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:05:15.129446  407368 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:05:15.129729  407368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:05:15.129740  407368 out.go:374] Setting ErrFile to fd 2...
	I1213 13:05:15.129747  407368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:05:15.129952  407368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:05:15.130531  407368 out.go:368] Setting JSON to false
	I1213 13:05:15.131487  407368 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6458,"bootTime":1765624657,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:05:15.131541  407368 start.go:143] virtualization: kvm guest
	I1213 13:05:15.133505  407368 out.go:179] * [addons-824997] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:05:15.134695  407368 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:05:15.134692  407368 notify.go:221] Checking for updates...
	I1213 13:05:15.135803  407368 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:05:15.137107  407368 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	I1213 13:05:15.138341  407368 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	I1213 13:05:15.139429  407368 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:05:15.140830  407368 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:05:15.142093  407368 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:05:15.165789  407368 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:05:15.165935  407368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:05:15.222714  407368 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-13 13:05:15.213242059 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:05:15.222852  407368 docker.go:319] overlay module found
	I1213 13:05:15.224709  407368 out.go:179] * Using the docker driver based on user configuration
	I1213 13:05:15.226002  407368 start.go:309] selected driver: docker
	I1213 13:05:15.226021  407368 start.go:927] validating driver "docker" against <nil>
	I1213 13:05:15.226041  407368 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:05:15.226631  407368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:05:15.279598  407368 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-13 13:05:15.270524176 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:05:15.279832  407368 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 13:05:15.280138  407368 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:05:15.281956  407368 out.go:179] * Using Docker driver with root privileges
	I1213 13:05:15.283087  407368 cni.go:84] Creating CNI manager for ""
	I1213 13:05:15.283172  407368 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 13:05:15.283189  407368 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 13:05:15.283264  407368 start.go:353] cluster config:
	{Name:addons-824997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-824997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:05:15.284562  407368 out.go:179] * Starting "addons-824997" primary control-plane node in "addons-824997" cluster
	I1213 13:05:15.285793  407368 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 13:05:15.286933  407368 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:05:15.288132  407368 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 13:05:15.288164  407368 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	I1213 13:05:15.288170  407368 cache.go:65] Caching tarball of preloaded images
	I1213 13:05:15.288249  407368 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:05:15.288263  407368 preload.go:238] Found /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 13:05:15.288271  407368 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1213 13:05:15.288639  407368 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/config.json ...
	I1213 13:05:15.288667  407368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/config.json: {Name:mkdd7b80f7dfaea3b3de88d47c9b6594a08551db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:15.305418  407368 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 13:05:15.305551  407368 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 13:05:15.305569  407368 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1213 13:05:15.305573  407368 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1213 13:05:15.305582  407368 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1213 13:05:15.305587  407368 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	I1213 13:05:28.343777  407368 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1213 13:05:28.343831  407368 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:05:28.343889  407368 start.go:360] acquireMachinesLock for addons-824997: {Name:mk2cca1eed48be9fad6e28b852a594a88beaff88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:05:28.343995  407368 start.go:364] duration metric: took 83.793µs to acquireMachinesLock for "addons-824997"
	I1213 13:05:28.344021  407368 start.go:93] Provisioning new machine with config: &{Name:addons-824997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-824997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 13:05:28.344098  407368 start.go:125] createHost starting for "" (driver="docker")
	I1213 13:05:28.345857  407368 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1213 13:05:28.346103  407368 start.go:159] libmachine.API.Create for "addons-824997" (driver="docker")
	I1213 13:05:28.346137  407368 client.go:173] LocalClient.Create starting
	I1213 13:05:28.346241  407368 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca.pem
	I1213 13:05:28.415222  407368 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/cert.pem
	I1213 13:05:28.605681  407368 cli_runner.go:164] Run: docker network inspect addons-824997 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 13:05:28.623246  407368 cli_runner.go:211] docker network inspect addons-824997 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 13:05:28.623325  407368 network_create.go:284] running [docker network inspect addons-824997] to gather additional debugging logs...
	I1213 13:05:28.623349  407368 cli_runner.go:164] Run: docker network inspect addons-824997
	W1213 13:05:28.640695  407368 cli_runner.go:211] docker network inspect addons-824997 returned with exit code 1
	I1213 13:05:28.640741  407368 network_create.go:287] error running [docker network inspect addons-824997]: docker network inspect addons-824997: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-824997 not found
	I1213 13:05:28.640755  407368 network_create.go:289] output of [docker network inspect addons-824997]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-824997 not found
	
	** /stderr **
	I1213 13:05:28.640891  407368 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:05:28.658942  407368 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d15b60}
	I1213 13:05:28.658993  407368 network_create.go:124] attempt to create docker network addons-824997 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 13:05:28.659054  407368 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-824997 addons-824997
	I1213 13:05:28.706892  407368 network_create.go:108] docker network addons-824997 192.168.49.0/24 created
	I1213 13:05:28.706925  407368 kic.go:121] calculated static IP "192.168.49.2" for the "addons-824997" container
	I1213 13:05:28.706998  407368 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 13:05:28.723362  407368 cli_runner.go:164] Run: docker volume create addons-824997 --label name.minikube.sigs.k8s.io=addons-824997 --label created_by.minikube.sigs.k8s.io=true
	I1213 13:05:28.741928  407368 oci.go:103] Successfully created a docker volume addons-824997
	I1213 13:05:28.742016  407368 cli_runner.go:164] Run: docker run --rm --name addons-824997-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-824997 --entrypoint /usr/bin/test -v addons-824997:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 13:05:32.172072  407368 cli_runner.go:217] Completed: docker run --rm --name addons-824997-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-824997 --entrypoint /usr/bin/test -v addons-824997:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (3.430016737s)
	I1213 13:05:32.172103  407368 oci.go:107] Successfully prepared a docker volume addons-824997
	I1213 13:05:32.172180  407368 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 13:05:32.172196  407368 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 13:05:32.172271  407368 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-824997:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 13:05:35.961064  407368 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-824997:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.788752782s)
	I1213 13:05:35.961100  407368 kic.go:203] duration metric: took 3.788899093s to extract preloaded images to volume ...
	W1213 13:05:35.961231  407368 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1213 13:05:35.961290  407368 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1213 13:05:35.961356  407368 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 13:05:36.014601  407368 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-824997 --name addons-824997 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-824997 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-824997 --network addons-824997 --ip 192.168.49.2 --volume addons-824997:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 13:05:36.279173  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Running}}
	I1213 13:05:36.298821  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:36.316365  407368 cli_runner.go:164] Run: docker exec addons-824997 stat /var/lib/dpkg/alternatives/iptables
	I1213 13:05:36.356918  407368 oci.go:144] the created container "addons-824997" has a running status.
	I1213 13:05:36.356957  407368 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa...
	I1213 13:05:36.414745  407368 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 13:05:36.445126  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:36.462859  407368 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 13:05:36.462883  407368 kic_runner.go:114] Args: [docker exec --privileged addons-824997 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 13:05:36.500997  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:36.521495  407368 machine.go:94] provisionDockerMachine start ...
	I1213 13:05:36.521616  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:36.542580  407368 main.go:143] libmachine: Using SSH client type: native
	I1213 13:05:36.542930  407368 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33152 <nil> <nil>}
	I1213 13:05:36.542951  407368 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:05:36.543731  407368 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39842->127.0.0.1:33152: read: connection reset by peer
	I1213 13:05:39.678469  407368 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-824997
	
	I1213 13:05:39.678511  407368 ubuntu.go:182] provisioning hostname "addons-824997"
	I1213 13:05:39.678589  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:39.696650  407368 main.go:143] libmachine: Using SSH client type: native
	I1213 13:05:39.696879  407368 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33152 <nil> <nil>}
	I1213 13:05:39.696892  407368 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-824997 && echo "addons-824997" | sudo tee /etc/hostname
	I1213 13:05:39.840458  407368 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-824997
	
	I1213 13:05:39.840553  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:39.858598  407368 main.go:143] libmachine: Using SSH client type: native
	I1213 13:05:39.858817  407368 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33152 <nil> <nil>}
	I1213 13:05:39.858842  407368 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-824997' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-824997/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-824997' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:05:39.993071  407368 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:05:39.993107  407368 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-401936/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-401936/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-401936/.minikube}
	I1213 13:05:39.993166  407368 ubuntu.go:190] setting up certificates
	I1213 13:05:39.993179  407368 provision.go:84] configureAuth start
	I1213 13:05:39.993240  407368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-824997
	I1213 13:05:40.012493  407368 provision.go:143] copyHostCerts
	I1213 13:05:40.012574  407368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-401936/.minikube/key.pem (1675 bytes)
	I1213 13:05:40.012715  407368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-401936/.minikube/ca.pem (1078 bytes)
	I1213 13:05:40.012814  407368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-401936/.minikube/cert.pem (1123 bytes)
	I1213 13:05:40.012892  407368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-401936/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca-key.pem org=jenkins.addons-824997 san=[127.0.0.1 192.168.49.2 addons-824997 localhost minikube]
	I1213 13:05:40.095784  407368 provision.go:177] copyRemoteCerts
	I1213 13:05:40.095851  407368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:05:40.095905  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:40.114554  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:40.212044  407368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:05:40.231280  407368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 13:05:40.248826  407368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 13:05:40.265860  407368 provision.go:87] duration metric: took 272.641877ms to configureAuth
	I1213 13:05:40.265894  407368 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:05:40.266072  407368 config.go:182] Loaded profile config "addons-824997": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:05:40.266087  407368 machine.go:97] duration metric: took 3.744562372s to provisionDockerMachine
	I1213 13:05:40.266096  407368 client.go:176] duration metric: took 11.919950233s to LocalClient.Create
	I1213 13:05:40.266119  407368 start.go:167] duration metric: took 11.920018267s to libmachine.API.Create "addons-824997"
	I1213 13:05:40.266129  407368 start.go:293] postStartSetup for "addons-824997" (driver="docker")
	I1213 13:05:40.266138  407368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:05:40.266188  407368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:05:40.266261  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:40.285385  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:40.383432  407368 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:05:40.387191  407368 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:05:40.387221  407368 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:05:40.387233  407368 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-401936/.minikube/addons for local assets ...
	I1213 13:05:40.387291  407368 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-401936/.minikube/files for local assets ...
	I1213 13:05:40.387325  407368 start.go:296] duration metric: took 121.178266ms for postStartSetup
	I1213 13:05:40.387647  407368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-824997
	I1213 13:05:40.406373  407368 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/config.json ...
	I1213 13:05:40.406650  407368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:05:40.406693  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:40.425828  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:40.519663  407368 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:05:40.524244  407368 start.go:128] duration metric: took 12.180128438s to createHost
	I1213 13:05:40.524273  407368 start.go:83] releasing machines lock for "addons-824997", held for 12.180264788s
	I1213 13:05:40.524366  407368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-824997
	I1213 13:05:40.542616  407368 ssh_runner.go:195] Run: cat /version.json
	I1213 13:05:40.542680  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:40.542688  407368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:05:40.542764  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:40.561374  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:40.562020  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:40.708113  407368 ssh_runner.go:195] Run: systemctl --version
	I1213 13:05:40.715371  407368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:05:40.720070  407368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:05:40.720145  407368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:05:40.744710  407368 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 13:05:40.744732  407368 start.go:496] detecting cgroup driver to use...
	I1213 13:05:40.744788  407368 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:05:40.744846  407368 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 13:05:40.758829  407368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 13:05:40.771652  407368 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:05:40.771833  407368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:05:40.787981  407368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:05:40.805841  407368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:05:40.889348  407368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:05:40.977411  407368 docker.go:234] disabling docker service ...
	I1213 13:05:40.977482  407368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:05:40.997568  407368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:05:41.010733  407368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:05:41.094936  407368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:05:41.178532  407368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:05:41.191093  407368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:05:41.205491  407368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 13:05:41.216134  407368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 13:05:41.224950  407368 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1213 13:05:41.225010  407368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1213 13:05:41.233575  407368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 13:05:41.242308  407368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 13:05:41.251061  407368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 13:05:41.259675  407368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:05:41.267810  407368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 13:05:41.276527  407368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 13:05:41.285138  407368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 13:05:41.293963  407368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:05:41.301404  407368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:05:41.308672  407368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:05:41.385139  407368 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 13:05:41.487470  407368 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 13:05:41.487553  407368 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 13:05:41.491590  407368 start.go:564] Will wait 60s for crictl version
	I1213 13:05:41.491659  407368 ssh_runner.go:195] Run: which crictl
	I1213 13:05:41.495327  407368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:05:41.519365  407368 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 13:05:41.519451  407368 ssh_runner.go:195] Run: containerd --version
	I1213 13:05:41.540059  407368 ssh_runner.go:195] Run: containerd --version
	I1213 13:05:41.563569  407368 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1213 13:05:41.564909  407368 cli_runner.go:164] Run: docker network inspect addons-824997 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:05:41.583274  407368 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 13:05:41.587525  407368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:05:41.598277  407368 kubeadm.go:884] updating cluster {Name:addons-824997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-824997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:05:41.598420  407368 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 13:05:41.598537  407368 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:05:41.622882  407368 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 13:05:41.622906  407368 containerd.go:534] Images already preloaded, skipping extraction
	I1213 13:05:41.622954  407368 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:05:41.647871  407368 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 13:05:41.647892  407368 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:05:41.647899  407368 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 containerd true true} ...
	I1213 13:05:41.648011  407368 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-824997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-824997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:05:41.648064  407368 ssh_runner.go:195] Run: sudo crictl info
	I1213 13:05:41.674100  407368 cni.go:84] Creating CNI manager for ""
	I1213 13:05:41.674123  407368 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 13:05:41.674142  407368 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 13:05:41.674164  407368 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-824997 NodeName:addons-824997 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:05:41.674301  407368 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-824997"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
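	[editor's note] The block above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that minikube later writes to /var/tmp/minikube/kubeadm.yaml.new. As a rough sketch of how such a multi-document file can be inspected outside the test (not minikube code; the struct and file path are only illustrative), the documents can be walked with a YAML decoder:

	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	// docMeta captures only the fields needed to identify each document
	// in the multi-document kubeadm config (hypothetical helper).
	type docMeta struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var m docMeta
			err := dec.Decode(&m)
			if err == io.EOF {
				break // no more YAML documents
			}
			if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s / %s\n", m.APIVersion, m.Kind)
		}
	}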
	
	I1213 13:05:41.674384  407368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 13:05:41.682866  407368 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:05:41.682926  407368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:05:41.691024  407368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1213 13:05:41.704127  407368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 13:05:41.719512  407368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1213 13:05:41.732250  407368 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:05:41.735964  407368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:05:41.746041  407368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:05:41.826503  407368 ssh_runner.go:195] Run: sudo systemctl start kubelet
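	[editor's note] The bash one-liner above replaces any existing /etc/hosts line for control-plane.minikube.internal and appends a fresh mapping before reloading systemd and starting the kubelet. A minimal Go sketch of the same idea (illustrative only, not minikube's implementation; function name is hypothetical):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites an /etc/hosts-style file so that exactly one
	// line maps host to ip, mirroring the grep/echo/cp pipeline in the log.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(strings.TrimSpace(line), "\t"+host) {
				continue // drop any stale mapping for this hostname
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}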
	I1213 13:05:41.849753  407368 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997 for IP: 192.168.49.2
	I1213 13:05:41.849781  407368 certs.go:195] generating shared ca certs ...
	I1213 13:05:41.849802  407368 certs.go:227] acquiring lock for ca certs: {Name:mk638ad0c55891f03a1600a7ef1d632862f1d7c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:41.849945  407368 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-401936/.minikube/ca.key
	I1213 13:05:41.948682  407368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt ...
	I1213 13:05:41.948717  407368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt: {Name:mka1efab3e3f2fab014d028f53e4a3c6df29cfc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:41.948934  407368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-401936/.minikube/ca.key ...
	I1213 13:05:41.948951  407368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/ca.key: {Name:mkb1af28460e41793895cf7eaf4ad9510ae4ba61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:41.949065  407368 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-401936/.minikube/proxy-client-ca.key
	I1213 13:05:42.116922  407368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-401936/.minikube/proxy-client-ca.crt ...
	I1213 13:05:42.116955  407368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/proxy-client-ca.crt: {Name:mka4a727392bd80f319b8913aba2de529948291d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:42.117170  407368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-401936/.minikube/proxy-client-ca.key ...
	I1213 13:05:42.117187  407368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/proxy-client-ca.key: {Name:mk6505600efbf1c0702a56fc1aaad304572ef725 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:42.117298  407368 certs.go:257] generating profile certs ...
	I1213 13:05:42.117389  407368 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.key
	I1213 13:05:42.117405  407368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt with IP's: []
	I1213 13:05:42.149032  407368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt ...
	I1213 13:05:42.149059  407368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: {Name:mk2af4e915b93db4183555665392282d4b1c4a1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:42.149251  407368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.key ...
	I1213 13:05:42.149267  407368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.key: {Name:mkb62cc98365581332ceb5df0499296adf83e348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:42.149391  407368 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/apiserver.key.63d434d9
	I1213 13:05:42.149412  407368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/apiserver.crt.63d434d9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 13:05:42.171262  407368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/apiserver.crt.63d434d9 ...
	I1213 13:05:42.171291  407368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/apiserver.crt.63d434d9: {Name:mkad21629714a2f53e74200f24ed5b5e9beb3487 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:42.171478  407368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/apiserver.key.63d434d9 ...
	I1213 13:05:42.171495  407368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/apiserver.key.63d434d9: {Name:mkfcde7c4be2b78f654736473a3630c61ef15dc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:42.171630  407368 certs.go:382] copying /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/apiserver.crt.63d434d9 -> /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/apiserver.crt
	I1213 13:05:42.171729  407368 certs.go:386] copying /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/apiserver.key.63d434d9 -> /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/apiserver.key
	I1213 13:05:42.171778  407368 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/proxy-client.key
	I1213 13:05:42.171802  407368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/proxy-client.crt with IP's: []
	I1213 13:05:42.201465  407368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/proxy-client.crt ...
	I1213 13:05:42.201495  407368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/proxy-client.crt: {Name:mk41b02bff5e8a9a31576267d2f32ad7ee11e95d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:42.201688  407368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/proxy-client.key ...
	I1213 13:05:42.201705  407368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/proxy-client.key: {Name:mk49f70b59f39a186ecaa0cfd2c7e6217b4f9a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:42.201950  407368 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:05:42.201994  407368 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:05:42.202020  407368 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:05:42.202045  407368 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/key.pem (1675 bytes)
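	[editor's note] The certs.go/crypto.go lines above create the shared "minikubeCA" and "proxyClientCA" roots plus profile certificates (client, apiserver with extra SANs, aggregator). As a hedged illustration of what generating such a self-signed CA involves in Go's standard library (key size, validity and file names are illustrative, not minikube's exact choices):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"os"
		"time"
	)

	func main() {
		// Generate the CA key pair.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		// Self-signed CA certificate template.
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
			IsCA:                  true,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		// Write PEM-encoded cert and key, roughly matching ca.crt / ca.key above.
		_ = os.WriteFile("ca.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
		_ = os.WriteFile("ca.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
	}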
	I1213 13:05:42.202666  407368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:05:42.221119  407368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 13:05:42.238545  407368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:05:42.256758  407368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 13:05:42.274290  407368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 13:05:42.292636  407368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 13:05:42.311194  407368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:05:42.329122  407368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 13:05:42.346748  407368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:05:42.367472  407368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:05:42.380243  407368 ssh_runner.go:195] Run: openssl version
	I1213 13:05:42.386170  407368 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:05:42.393426  407368 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:05:42.403962  407368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:05:42.407836  407368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:05:42.407897  407368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:05:42.441706  407368 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:05:42.449776  407368 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 13:05:42.457499  407368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:05:42.461279  407368 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 13:05:42.461366  407368 kubeadm.go:401] StartCluster: {Name:addons-824997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-824997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:05:42.461454  407368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 13:05:42.461506  407368 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:05:42.488985  407368 cri.go:89] found id: ""
	I1213 13:05:42.489057  407368 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:05:42.497487  407368 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 13:05:42.505556  407368 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 13:05:42.505625  407368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 13:05:42.513661  407368 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 13:05:42.513685  407368 kubeadm.go:158] found existing configuration files:
	
	I1213 13:05:42.513743  407368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 13:05:42.521559  407368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 13:05:42.521611  407368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 13:05:42.528851  407368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 13:05:42.536695  407368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 13:05:42.536751  407368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 13:05:42.544328  407368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 13:05:42.552015  407368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 13:05:42.552070  407368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 13:05:42.559578  407368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 13:05:42.567500  407368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 13:05:42.567567  407368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
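	[editor's note] The block above is minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file if the endpoint is missing (here all four files simply don't exist yet, so the greps exit with status 2 and the rm calls are no-ops). A simplified Go sketch of the same check-then-remove pattern (illustrative only; function name is hypothetical):

	package main

	import (
		"os"
		"strings"
	)

	// cleanStaleKubeconfigs removes kubeconfig files that do not reference the
	// expected control-plane endpoint, mirroring the grep/rm sequence above.
	func cleanStaleKubeconfigs(endpoint string) {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				continue // missing file: nothing to clean (first start)
			}
			if !strings.Contains(string(data), endpoint) {
				_ = os.Remove(f) // stale config pointing at a different endpoint
			}
		}
	}

	func main() {
		cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
	}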
	I1213 13:05:42.574936  407368 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 13:05:42.611549  407368 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 13:05:42.611609  407368 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 13:05:42.643091  407368 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 13:05:42.643197  407368 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1213 13:05:42.643247  407368 kubeadm.go:319] OS: Linux
	I1213 13:05:42.643332  407368 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 13:05:42.643418  407368 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 13:05:42.643511  407368 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 13:05:42.643595  407368 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 13:05:42.643689  407368 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 13:05:42.643770  407368 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 13:05:42.643835  407368 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 13:05:42.643897  407368 kubeadm.go:319] CGROUPS_IO: enabled
	I1213 13:05:42.701881  407368 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 13:05:42.702030  407368 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 13:05:42.702192  407368 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 13:05:42.707538  407368 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 13:05:42.709531  407368 out.go:252]   - Generating certificates and keys ...
	I1213 13:05:42.709608  407368 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 13:05:42.709672  407368 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 13:05:43.022661  407368 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 13:05:43.237971  407368 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 13:05:43.605100  407368 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 13:05:43.745016  407368 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 13:05:44.043555  407368 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 13:05:44.043702  407368 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-824997 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 13:05:44.270960  407368 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 13:05:44.271138  407368 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-824997 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 13:05:44.321522  407368 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 13:05:44.696633  407368 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 13:05:44.808162  407368 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 13:05:44.808236  407368 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 13:05:44.915605  407368 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 13:05:45.236815  407368 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 13:05:45.556082  407368 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 13:05:45.701745  407368 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 13:05:45.925766  407368 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 13:05:45.926298  407368 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 13:05:45.930234  407368 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 13:05:45.931818  407368 out.go:252]   - Booting up control plane ...
	I1213 13:05:45.931906  407368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 13:05:45.931973  407368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 13:05:45.932566  407368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 13:05:45.947298  407368 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 13:05:45.947495  407368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 13:05:45.954274  407368 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 13:05:45.954510  407368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 13:05:45.954557  407368 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 13:05:46.056851  407368 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 13:05:46.057000  407368 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 13:05:46.558867  407368 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.083246ms
	I1213 13:05:46.561689  407368 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 13:05:46.561821  407368 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1213 13:05:46.561902  407368 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 13:05:46.561978  407368 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 13:05:48.169366  407368 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.606565481s
	I1213 13:05:48.620433  407368 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.058710822s
	I1213 13:05:50.063728  407368 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501951775s
	I1213 13:05:50.081454  407368 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 13:05:50.091072  407368 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 13:05:50.099694  407368 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 13:05:50.099991  407368 kubeadm.go:319] [mark-control-plane] Marking the node addons-824997 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 13:05:50.108438  407368 kubeadm.go:319] [bootstrap-token] Using token: lgaeun.dx5x6s4414vidk1x
	I1213 13:05:50.109814  407368 out.go:252]   - Configuring RBAC rules ...
	I1213 13:05:50.109945  407368 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 13:05:50.114147  407368 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 13:05:50.119633  407368 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 13:05:50.122018  407368 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 13:05:50.124403  407368 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 13:05:50.126801  407368 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 13:05:50.468895  407368 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 13:05:50.882612  407368 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 13:05:51.469407  407368 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 13:05:51.470198  407368 kubeadm.go:319] 
	I1213 13:05:51.470295  407368 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 13:05:51.470305  407368 kubeadm.go:319] 
	I1213 13:05:51.470433  407368 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 13:05:51.470443  407368 kubeadm.go:319] 
	I1213 13:05:51.470475  407368 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 13:05:51.470559  407368 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 13:05:51.470664  407368 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 13:05:51.470697  407368 kubeadm.go:319] 
	I1213 13:05:51.470766  407368 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 13:05:51.470774  407368 kubeadm.go:319] 
	I1213 13:05:51.470840  407368 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 13:05:51.470849  407368 kubeadm.go:319] 
	I1213 13:05:51.470923  407368 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 13:05:51.471021  407368 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 13:05:51.471132  407368 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 13:05:51.471142  407368 kubeadm.go:319] 
	I1213 13:05:51.471262  407368 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 13:05:51.471409  407368 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 13:05:51.471417  407368 kubeadm.go:319] 
	I1213 13:05:51.471524  407368 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lgaeun.dx5x6s4414vidk1x \
	I1213 13:05:51.471690  407368 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:05d8a85c1b2761169b95534d93c81e4c18e60369e201d73b5567ad02426dd2e0 \
	I1213 13:05:51.471728  407368 kubeadm.go:319] 	--control-plane 
	I1213 13:05:51.471733  407368 kubeadm.go:319] 
	I1213 13:05:51.471821  407368 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 13:05:51.471829  407368 kubeadm.go:319] 
	I1213 13:05:51.471904  407368 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lgaeun.dx5x6s4414vidk1x \
	I1213 13:05:51.472006  407368 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:05d8a85c1b2761169b95534d93c81e4c18e60369e201d73b5567ad02426dd2e0 
	I1213 13:05:51.474152  407368 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1213 13:05:51.474256  407368 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
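	[editor's note] The --discovery-token-ca-cert-hash printed in the join commands above is, as far as kubeadm documents it, a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A small Go sketch that reproduces such a value from the CA written earlier in this log (illustrative, not minikube code):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Hash the raw SubjectPublicKeyInfo, the same material kubeadm pins.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}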
	I1213 13:05:51.474282  407368 cni.go:84] Creating CNI manager for ""
	I1213 13:05:51.474292  407368 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 13:05:51.476242  407368 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 13:05:51.477502  407368 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 13:05:51.482045  407368 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 13:05:51.482073  407368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 13:05:51.495148  407368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 13:05:51.699402  407368 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 13:05:51.699470  407368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:51.699514  407368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-824997 minikube.k8s.io/updated_at=2025_12_13T13_05_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7 minikube.k8s.io/name=addons-824997 minikube.k8s.io/primary=true
	I1213 13:05:51.709478  407368 ops.go:34] apiserver oom_adj: -16
	I1213 13:05:51.784490  407368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:52.284865  407368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:52.785447  407368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:53.285462  407368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:53.785417  407368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:54.285266  407368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:54.784681  407368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:55.285493  407368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:55.784568  407368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:56.284670  407368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:56.349025  407368 kubeadm.go:1114] duration metric: took 4.649608319s to wait for elevateKubeSystemPrivileges
	I1213 13:05:56.349068  407368 kubeadm.go:403] duration metric: took 13.887708277s to StartCluster
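	[editor's note] The repeated "kubectl get sa default" runs above are a poll loop: minikube re-checks roughly every 500ms until the default ServiceAccount exists, which is what the 4.6s "elevateKubeSystemPrivileges" duration measures. A minimal Go sketch of that waiting pattern (command paths copied from the log; the function itself is only illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls until the "default" ServiceAccount can be fetched.
	func waitForDefaultSA(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo",
				"/var/lib/minikube/binaries/v1.34.2/kubectl",
				"get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig",
			)
			if err := cmd.Run(); err == nil {
				return nil // service account exists; RBAC bootstrap is done
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}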
	I1213 13:05:56.349089  407368 settings.go:142] acquiring lock: {Name:mk71afd6e9758cc52371589a74f73214557044d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:56.349198  407368 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-401936/kubeconfig
	I1213 13:05:56.349680  407368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/kubeconfig: {Name:mk743b5761bd946614fa12c7aa179660c36f36c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:56.349887  407368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 13:05:56.349897  407368 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 13:05:56.349962  407368 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
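	[editor's note] The heavily interleaved "Setting addon ..." and "docker container inspect" lines that follow come from enabling the requested addons concurrently. A rough sketch of that fan-out pattern (enableAddon is a hypothetical stub, not minikube's function; addon names are taken from the toEnable map above):

	package main

	import (
		"fmt"
		"sync"
	)

	// enableAddon stands in for the per-addon enable logic.
	func enableAddon(profile, name string) error {
		fmt.Printf("Setting addon %s=true in %q\n", name, profile)
		return nil
	}

	func main() {
		toEnable := []string{
			"yakd", "inspektor-gadget", "metrics-server", "ingress", "ingress-dns",
			"registry", "storage-provisioner", "csi-hostpath-driver", "volcano", "volumesnapshots",
		}
		var wg sync.WaitGroup
		for _, name := range toEnable {
			wg.Add(1)
			go func(n string) {
				defer wg.Done()
				if err := enableAddon("addons-824997", n); err != nil {
					fmt.Printf("enabling %s failed: %v\n", n, err)
				}
			}(name)
		}
		wg.Wait() // output order is nondeterministic, hence the interleaving below
	}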
	I1213 13:05:56.350090  407368 addons.go:70] Setting yakd=true in profile "addons-824997"
	I1213 13:05:56.350104  407368 addons.go:70] Setting inspektor-gadget=true in profile "addons-824997"
	I1213 13:05:56.350123  407368 addons.go:239] Setting addon yakd=true in "addons-824997"
	I1213 13:05:56.350126  407368 addons.go:239] Setting addon inspektor-gadget=true in "addons-824997"
	I1213 13:05:56.350143  407368 addons.go:70] Setting default-storageclass=true in profile "addons-824997"
	I1213 13:05:56.350152  407368 addons.go:70] Setting ingress=true in profile "addons-824997"
	I1213 13:05:56.350166  407368 addons.go:70] Setting metrics-server=true in profile "addons-824997"
	I1213 13:05:56.350167  407368 addons.go:70] Setting ingress-dns=true in profile "addons-824997"
	I1213 13:05:56.350174  407368 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-824997"
	I1213 13:05:56.350178  407368 addons.go:239] Setting addon metrics-server=true in "addons-824997"
	I1213 13:05:56.350184  407368 addons.go:239] Setting addon ingress-dns=true in "addons-824997"
	I1213 13:05:56.350187  407368 addons.go:70] Setting gcp-auth=true in profile "addons-824997"
	I1213 13:05:56.350158  407368 host.go:66] Checking if "addons-824997" exists ...
	I1213 13:05:56.350195  407368 host.go:66] Checking if "addons-824997" exists ...
	I1213 13:05:56.350196  407368 config.go:182] Loaded profile config "addons-824997": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:05:56.350188  407368 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-824997"
	I1213 13:05:56.350158  407368 host.go:66] Checking if "addons-824997" exists ...
	I1213 13:05:56.350156  407368 addons.go:70] Setting cloud-spanner=true in profile "addons-824997"
	I1213 13:05:56.350222  407368 addons.go:239] Setting addon cloud-spanner=true in "addons-824997"
	I1213 13:05:56.350233  407368 host.go:66] Checking if "addons-824997" exists ...
	I1213 13:05:56.350237  407368 host.go:66] Checking if "addons-824997" exists ...
	I1213 13:05:56.350204  407368 mustload.go:66] Loading cluster: addons-824997
	I1213 13:05:56.350263  407368 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-824997"
	I1213 13:05:56.350340  407368 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-824997"
	I1213 13:05:56.350177  407368 addons.go:239] Setting addon ingress=true in "addons-824997"
	I1213 13:05:56.350364  407368 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-824997"
	I1213 13:05:56.350368  407368 host.go:66] Checking if "addons-824997" exists ...
	I1213 13:05:56.350385  407368 host.go:66] Checking if "addons-824997" exists ...
	I1213 13:05:56.350389  407368 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-824997"
	I1213 13:05:56.350415  407368 host.go:66] Checking if "addons-824997" exists ...
	I1213 13:05:56.350440  407368 config.go:182] Loaded profile config "addons-824997": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:05:56.350711  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.350760  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.350760  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.350775  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.350799  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.350803  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.350839  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.350848  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.351267  407368 addons.go:70] Setting volcano=true in profile "addons-824997"
	I1213 13:05:56.351284  407368 addons.go:239] Setting addon volcano=true in "addons-824997"
	I1213 13:05:56.351304  407368 host.go:66] Checking if "addons-824997" exists ...
	I1213 13:05:56.351662  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.362047  407368 out.go:179] * Verifying Kubernetes components...
	I1213 13:05:56.350210  407368 host.go:66] Checking if "addons-824997" exists ...
	I1213 13:05:56.363986  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.350353  407368 addons.go:70] Setting storage-provisioner=true in profile "addons-824997"
	I1213 13:05:56.364815  407368 addons.go:239] Setting addon storage-provisioner=true in "addons-824997"
	I1213 13:05:56.364866  407368 host.go:66] Checking if "addons-824997" exists ...
	I1213 13:05:56.365370  407368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:05:56.365427  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.367136  407368 addons.go:70] Setting volumesnapshots=true in profile "addons-824997"
	I1213 13:05:56.367158  407368 addons.go:239] Setting addon volumesnapshots=true in "addons-824997"
	I1213 13:05:56.367201  407368 host.go:66] Checking if "addons-824997" exists ...
	I1213 13:05:56.367729  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.350203  407368 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-824997"
	I1213 13:05:56.368463  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.368669  407368 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-824997"
	I1213 13:05:56.368692  407368 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-824997"
	I1213 13:05:56.369040  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.371870  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.372464  407368 addons.go:70] Setting registry=true in profile "addons-824997"
	I1213 13:05:56.372488  407368 addons.go:239] Setting addon registry=true in "addons-824997"
	I1213 13:05:56.372531  407368 host.go:66] Checking if "addons-824997" exists ...
	I1213 13:05:56.373183  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.375139  407368 addons.go:70] Setting registry-creds=true in profile "addons-824997"
	I1213 13:05:56.375169  407368 addons.go:239] Setting addon registry-creds=true in "addons-824997"
	I1213 13:05:56.375211  407368 host.go:66] Checking if "addons-824997" exists ...
	I1213 13:05:56.375709  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.409781  407368 host.go:66] Checking if "addons-824997" exists ...
	I1213 13:05:56.411006  407368 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1213 13:05:56.412248  407368 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1213 13:05:56.412274  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 13:05:56.412346  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:56.419200  407368 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1213 13:05:56.419289  407368 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 13:05:56.420733  407368 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 13:05:56.420756  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 13:05:56.420836  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:56.420996  407368 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1213 13:05:56.422187  407368 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1213 13:05:56.436269  407368 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1213 13:05:56.436373  407368 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 13:05:56.436268  407368 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1213 13:05:56.437977  407368 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 13:05:56.438005  407368 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 13:05:56.438081  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:56.438830  407368 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 13:05:56.438851  407368 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 13:05:56.438908  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:56.439136  407368 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 13:05:56.439149  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1213 13:05:56.439193  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:56.440680  407368 addons.go:436] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1213 13:05:56.440705  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1213 13:05:56.440755  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:56.444400  407368 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:05:56.446214  407368 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:05:56.446231  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 13:05:56.446282  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:56.461162  407368 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1213 13:05:56.466035  407368 addons.go:239] Setting addon default-storageclass=true in "addons-824997"
	I1213 13:05:56.466084  407368 host.go:66] Checking if "addons-824997" exists ...
	I1213 13:05:56.468276  407368 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-824997"
	I1213 13:05:56.468332  407368 host.go:66] Checking if "addons-824997" exists ...
	I1213 13:05:56.468967  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.470393  407368 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 13:05:56.470799  407368 out.go:179]   - Using image docker.io/registry:3.0.0
	I1213 13:05:56.470890  407368 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 13:05:56.470904  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1213 13:05:56.471027  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:56.471979  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:05:56.474024  407368 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 13:05:56.474040  407368 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1213 13:05:56.474217  407368 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 13:05:56.475524  407368 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 13:05:56.475559  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1213 13:05:56.475638  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:56.475516  407368 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1213 13:05:56.475880  407368 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 13:05:56.482338  407368 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 13:05:56.482362  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 13:05:56.482431  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:56.485433  407368 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 13:05:56.489312  407368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
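	[editor's note] The long sed pipeline above patches the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.49.1) inside the cluster. A simplified Go sketch of just the hosts-block injection (the sed also adds a "log" directive, which is omitted here; names are illustrative only):

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostsBlock inserts a CoreDNS "hosts" stanza before the "forward"
	// plugin line, roughly what the sed expression above does.
	func injectHostsBlock(corefile, hostIP string) string {
		hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
		var out []string
		for _, line := range strings.Split(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				out = append(out, hosts)
			}
			out = append(out, line)
		}
		return strings.Join(out, "\n")
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}"
		fmt.Println(injectHostsBlock(corefile, "192.168.49.1"))
	}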
	I1213 13:05:56.489899  407368 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1213 13:05:56.489911  407368 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1213 13:05:56.490101  407368 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1213 13:05:56.491374  407368 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 13:05:56.491495  407368 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 13:05:56.491511  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 13:05:56.491575  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:56.491936  407368 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 13:05:56.491952  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 13:05:56.492009  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:56.493826  407368 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 13:05:56.496559  407368 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 13:05:56.496895  407368 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 13:05:56.496915  407368 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 13:05:56.497097  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:56.499098  407368 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 13:05:56.500430  407368 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 13:05:56.502422  407368 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 13:05:56.502446  407368 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 13:05:56.502516  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:56.517454  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:56.520700  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:56.523442  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:56.523892  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:56.527195  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:56.534279  407368 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 13:05:56.535663  407368 out.go:179]   - Using image docker.io/busybox:stable
	I1213 13:05:56.537202  407368 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 13:05:56.537220  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 13:05:56.537287  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:56.539489  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:56.543881  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:56.563405  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:56.564809  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:56.574537  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:56.585857  407368 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:05:56.585884  407368 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:05:56.585953  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:05:56.587543  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:56.589293  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:56.598116  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:56.603552  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:56.604502  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:56.615358  407368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:05:56.624710  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:05:56.729286  407368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 13:05:56.741369  407368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1213 13:05:56.741775  407368 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 13:05:56.741835  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 13:05:56.753858  407368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:05:56.759279  407368 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 13:05:56.759310  407368 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 13:05:56.761192  407368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 13:05:56.766690  407368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 13:05:56.769965  407368 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 13:05:56.769989  407368 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 13:05:56.781771  407368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 13:05:56.782662  407368 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 13:05:56.782688  407368 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 13:05:56.788076  407368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:05:56.793494  407368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 13:05:56.800276  407368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 13:05:56.804523  407368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 13:05:56.806550  407368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 13:05:56.808653  407368 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 13:05:56.808720  407368 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 13:05:56.809544  407368 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 13:05:56.809629  407368 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 13:05:56.812677  407368 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 13:05:56.812695  407368 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 13:05:56.832508  407368 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 13:05:56.832598  407368 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 13:05:56.837381  407368 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 13:05:56.837408  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 13:05:56.851870  407368 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 13:05:56.851898  407368 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 13:05:56.871881  407368 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 13:05:56.871910  407368 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 13:05:56.875453  407368 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 13:05:56.875477  407368 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 13:05:56.896857  407368 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 13:05:56.896891  407368 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 13:05:56.898137  407368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 13:05:56.924352  407368 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 13:05:56.924380  407368 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1213 13:05:56.943439  407368 node_ready.go:35] waiting up to 6m0s for node "addons-824997" to be "Ready" ...
	I1213 13:05:56.944295  407368 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
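The host.minikube.internal record logged above is written into the coredns ConfigMap in kube-system so in-cluster resolvers can reach the host at 192.168.49.1. A sketch of how to inspect it, assuming the same context name as this run:

	kubectl --context addons-824997 -n kube-system get configmap coredns -o yaml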
	I1213 13:05:56.965575  407368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 13:05:56.972710  407368 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 13:05:56.972799  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 13:05:56.976704  407368 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 13:05:56.976791  407368 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 13:05:56.987505  407368 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 13:05:56.987527  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 13:05:57.050170  407368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 13:05:57.050286  407368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 13:05:57.102627  407368 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 13:05:57.102658  407368 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 13:05:57.183429  407368 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 13:05:57.183527  407368 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 13:05:57.270449  407368 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 13:05:57.270477  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 13:05:57.343739  407368 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 13:05:57.343817  407368 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 13:05:57.414377  407368 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 13:05:57.414405  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 13:05:57.443534  407368 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 13:05:57.443566  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 13:05:57.451722  407368 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-824997" context rescaled to 1 replicas
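The rescale at 13:05:57 is minikube trimming the default two-replica CoreDNS Deployment down to one. A roughly equivalent manual command, assuming the standard deployment name, would be:

	kubectl --context addons-824997 -n kube-system scale deployment coredns --replicas=1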
	I1213 13:05:57.486139  407368 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 13:05:57.486172  407368 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1213 13:05:57.511019  407368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 13:05:57.818291  407368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.088806761s)
	W1213 13:05:58.947239  407368 node_ready.go:57] node "addons-824997" has "Ready":"False" status (will retry)
	I1213 13:05:58.949336  407368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (2.207913679s)
	I1213 13:05:58.949438  407368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.195544534s)
	I1213 13:05:58.949481  407368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (2.188265116s)
	I1213 13:05:58.949521  407368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.182809575s)
	I1213 13:05:58.949657  407368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.16785605s)
	I1213 13:05:58.949682  407368 addons.go:495] Verifying addon ingress=true in "addons-824997"
	I1213 13:05:58.949706  407368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.161595601s)
	I1213 13:05:58.949716  407368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.156198963s)
	I1213 13:05:58.949825  407368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.145225625s)
	I1213 13:05:58.949793  407368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (2.149493313s)
	I1213 13:05:58.949879  407368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.143260407s)
	I1213 13:05:58.949929  407368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.051760094s)
	I1213 13:05:58.949956  407368 addons.go:495] Verifying addon registry=true in "addons-824997"
	I1213 13:05:58.950053  407368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.899817587s)
	I1213 13:05:58.950028  407368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.984419217s)
	I1213 13:05:58.950114  407368 addons.go:495] Verifying addon metrics-server=true in "addons-824997"
	I1213 13:05:58.950205  407368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.899839201s)
	W1213 13:05:58.950244  407368 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 13:05:58.950281  407368 retry.go:31] will retry after 292.400583ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
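This first failure is the usual kubectl ordering problem: the VolumeSnapshotClass object is applied in the same pass that creates its CRD, and the CRD is not yet established when the custom resource is validated, hence "no matches for kind VolumeSnapshotClass". minikube recovers by retrying (and, at 13:05:59 below, re-applying with --force). A manual workaround under the same assumption would be to wait for the CRD to become established before applying the class:

	kubectl wait --for=condition=Established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml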
	I1213 13:05:58.950445  407368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.439383941s)
	I1213 13:05:58.950468  407368 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-824997"
	I1213 13:05:58.952984  407368 out.go:179] * Verifying registry addon...
	I1213 13:05:58.952984  407368 out.go:179] * Verifying ingress addon...
	I1213 13:05:58.953067  407368 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-824997 service yakd-dashboard -n yakd-dashboard
	
	I1213 13:05:58.954830  407368 out.go:179] * Verifying csi-hostpath-driver addon...
	I1213 13:05:58.955488  407368 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 13:05:58.955947  407368 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 13:05:58.957489  407368 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	W1213 13:05:58.968836  407368 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
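The storage-provisioner-rancher warning above is an optimistic-concurrency conflict: two addon routines update the csi-hostpath-sc StorageClass at nearly the same time, so the second update carries a stale resourceVersion and the API server rejects it. The addon is still installed; only the "make local-path the default class" step lost the race. A sketch of flipping the default-class annotation by hand, assuming the same class names, uses kubectl patch (which does not send a resourceVersion and so cannot hit this conflict):

	kubectl patch storageclass csi-hostpath-sc -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'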
	I1213 13:05:58.970483  407368 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 13:05:58.970507  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:58.970691  407368 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 13:05:58.970711  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:58.970697  407368 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 13:05:58.970726  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:59.243753  407368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 13:05:59.459011  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:59.459173  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:59.460537  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:59.959428  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:59.959565  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:59.962625  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:00.459491  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:00.459783  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:00.459847  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:06:00.947618  407368 node_ready.go:57] node "addons-824997" has "Ready":"False" status (will retry)
	I1213 13:06:00.959761  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:00.959819  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:00.959824  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:01.458793  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:01.459031  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:01.460390  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:01.795054  407368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.551251429s)
	I1213 13:06:01.960052  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:01.960247  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:01.960331  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:02.459670  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:02.459858  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:02.460105  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:02.959440  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:02.959622  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:02.959717  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 13:06:03.446269  407368 node_ready.go:57] node "addons-824997" has "Ready":"False" status (will retry)
	I1213 13:06:03.459171  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:03.459411  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:03.460455  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:03.960104  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:03.960238  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:03.960303  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:04.042243  407368 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 13:06:04.042335  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:06:04.062923  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
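The mapped SSH port (33152 here) comes from the docker inspect Go template logged just above, which reads the host port bound to the container's 22/tcp. A roughly equivalent manual check, assuming the same container name and key path from this run:

	docker port addons-824997 22/tcp
	ssh -i /home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa -p 33152 docker@127.0.0.1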
	I1213 13:06:04.166740  407368 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 13:06:04.180356  407368 addons.go:239] Setting addon gcp-auth=true in "addons-824997"
	I1213 13:06:04.180416  407368 host.go:66] Checking if "addons-824997" exists ...
	I1213 13:06:04.180868  407368 cli_runner.go:164] Run: docker container inspect addons-824997 --format={{.State.Status}}
	I1213 13:06:04.199017  407368 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 13:06:04.199074  407368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-824997
	I1213 13:06:04.218017  407368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/addons-824997/id_rsa Username:docker}
	I1213 13:06:04.312311  407368 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 13:06:04.313629  407368 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 13:06:04.314897  407368 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 13:06:04.314920  407368 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 13:06:04.328395  407368 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 13:06:04.328421  407368 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 13:06:04.342368  407368 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 13:06:04.342393  407368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 13:06:04.355369  407368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 13:06:04.460306  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:04.461153  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:04.461189  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:04.668921  407368 addons.go:495] Verifying addon gcp-auth=true in "addons-824997"
	I1213 13:06:04.670535  407368 out.go:179] * Verifying gcp-auth addon...
	I1213 13:06:04.672599  407368 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 13:06:04.675472  407368 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 13:06:04.675493  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
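Each kapi.go "waiting for pod" loop polls until the labelled pod reports Ready. The same check can be expressed directly with kubectl, for example for the gcp-auth webhook (a sketch assuming this run's context name):

	kubectl --context addons-824997 -n gcp-auth wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=gcp-auth --timeout=5m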
	I1213 13:06:04.959591  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:04.959680  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:04.959722  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:05.175948  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1213 13:06:05.446769  407368 node_ready.go:57] node "addons-824997" has "Ready":"False" status (will retry)
	I1213 13:06:05.459800  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:05.460007  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:05.460091  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:05.675844  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:05.960083  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:05.960116  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:05.960408  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:06.176289  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:06.459445  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:06.459857  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:06.459890  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:06.675843  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:06.959865  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:06.959894  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:06.960308  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:07.176189  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1213 13:06:07.447046  407368 node_ready.go:57] node "addons-824997" has "Ready":"False" status (will retry)
	I1213 13:06:07.460071  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:07.460120  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:07.460430  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:07.676360  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:07.959262  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:07.959489  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:07.960521  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:08.176627  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:08.445735  407368 node_ready.go:49] node "addons-824997" is "Ready"
	I1213 13:06:08.445780  407368 node_ready.go:38] duration metric: took 11.502301943s for node "addons-824997" to be "Ready" ...
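The 11.5s node_ready wait is equivalent to blocking on the node's Ready condition, which can also be done from the test host (sketch, same context and node name):

	kubectl --context addons-824997 wait --for=condition=Ready node/addons-824997 --timeout=6m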
	I1213 13:06:08.445801  407368 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:06:08.445885  407368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:06:08.465903  407368 api_server.go:72] duration metric: took 12.115977225s to wait for apiserver process to appear ...
	I1213 13:06:08.465952  407368 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:06:08.465977  407368 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1213 13:06:08.472907  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:08.474497  407368 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 13:06:08.474518  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:08.474851  407368 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 13:06:08.474874  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:08.475645  407368 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1213 13:06:08.478347  407368 api_server.go:141] control plane version: v1.34.2
	I1213 13:06:08.478378  407368 api_server.go:131] duration metric: took 12.417196ms to wait for apiserver health ...
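The healthz probe above hits the apiserver endpoint directly at 192.168.49.2:8443. The same endpoint can be queried through kubectl's configured credentials:

	kubectl --context addons-824997 get --raw='/healthz'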
	I1213 13:06:08.478391  407368 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:06:08.577071  407368 system_pods.go:59] 20 kube-system pods found
	I1213 13:06:08.577184  407368 system_pods.go:61] "amd-gpu-device-plugin-dmbzs" [279d1498-a14f-4451-817b-f77e32c0940f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 13:06:08.577218  407368 system_pods.go:61] "coredns-66bc5c9577-9s6qd" [0b34457c-f145-4879-a731-40e7dbbfa078] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:06:08.577241  407368 system_pods.go:61] "csi-hostpath-attacher-0" [a17d16f8-2f00-4320-838f-384ee0b9a07c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 13:06:08.577262  407368 system_pods.go:61] "csi-hostpath-resizer-0" [1a4be71b-60b1-418d-9dc4-0d44a44be3ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 13:06:08.577290  407368 system_pods.go:61] "csi-hostpathplugin-s7gmj" [d156b5d1-c702-41b9-90fb-6557fd9e680d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 13:06:08.577440  407368 system_pods.go:61] "etcd-addons-824997" [88965af2-50d1-43a1-a088-9e7ea0d37438] Running
	I1213 13:06:08.577459  407368 system_pods.go:61] "kindnet-5x6hz" [5b91085a-7daf-4698-937e-59ee22954cde] Running
	I1213 13:06:08.577465  407368 system_pods.go:61] "kube-apiserver-addons-824997" [f5928f87-14d9-46a4-8d5a-8ba16d105ce0] Running
	I1213 13:06:08.577470  407368 system_pods.go:61] "kube-controller-manager-addons-824997" [1a5cb4a1-3733-40fb-b251-37fc23a92a63] Running
	I1213 13:06:08.577479  407368 system_pods.go:61] "kube-ingress-dns-minikube" [44750bbe-963d-418d-951d-edafb4cedd97] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 13:06:08.577484  407368 system_pods.go:61] "kube-proxy-98lpp" [14fad822-0002-4720-8b7d-bc0c91ed9b30] Running
	I1213 13:06:08.577489  407368 system_pods.go:61] "kube-scheduler-addons-824997" [299f4dbc-946f-43f2-a644-1c5735baebd4] Running
	I1213 13:06:08.577496  407368 system_pods.go:61] "metrics-server-85b7d694d7-7q9sx" [ae8558d5-7777-4f0b-93db-322b4e89148f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 13:06:08.577509  407368 system_pods.go:61] "nvidia-device-plugin-daemonset-vq87l" [e2a48ba5-a70e-40a2-add1-029a7bf0ef4c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 13:06:08.577520  407368 system_pods.go:61] "registry-6b586f9694-hh7xr" [6e260024-f4a8-4789-a4ce-0e6144434b7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:08.577528  407368 system_pods.go:61] "registry-creds-764b6fb674-pl9zq" [6f61ff7e-5bac-4fc7-945f-a1393d58c084] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:08.577536  407368 system_pods.go:61] "registry-proxy-99sw8" [ae5fdbb1-065d-40ba-9f98-fb248ffde339] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 13:06:08.577543  407368 system_pods.go:61] "snapshot-controller-7d9fbc56b8-5cwwk" [47b836cb-86d6-424a-8e11-a53c79e74f88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:08.577555  407368 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fksqt" [2c56a756-f631-43ae-a6a8-e6fbb92ec911] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:08.577563  407368 system_pods.go:61] "storage-provisioner" [908235ee-d117-47df-9462-3e85c24ebf10] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:06:08.577572  407368 system_pods.go:74] duration metric: took 99.173311ms to wait for pod list to return data ...
	I1213 13:06:08.577582  407368 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:06:08.580682  407368 default_sa.go:45] found service account: "default"
	I1213 13:06:08.580702  407368 default_sa.go:55] duration metric: took 3.113235ms for default service account to be created ...
	I1213 13:06:08.580711  407368 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 13:06:08.678773  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:08.679991  407368 system_pods.go:86] 20 kube-system pods found
	I1213 13:06:08.680036  407368 system_pods.go:89] "amd-gpu-device-plugin-dmbzs" [279d1498-a14f-4451-817b-f77e32c0940f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 13:06:08.680062  407368 system_pods.go:89] "coredns-66bc5c9577-9s6qd" [0b34457c-f145-4879-a731-40e7dbbfa078] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:06:08.680078  407368 system_pods.go:89] "csi-hostpath-attacher-0" [a17d16f8-2f00-4320-838f-384ee0b9a07c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 13:06:08.680102  407368 system_pods.go:89] "csi-hostpath-resizer-0" [1a4be71b-60b1-418d-9dc4-0d44a44be3ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 13:06:08.680126  407368 system_pods.go:89] "csi-hostpathplugin-s7gmj" [d156b5d1-c702-41b9-90fb-6557fd9e680d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 13:06:08.680140  407368 system_pods.go:89] "etcd-addons-824997" [88965af2-50d1-43a1-a088-9e7ea0d37438] Running
	I1213 13:06:08.680147  407368 system_pods.go:89] "kindnet-5x6hz" [5b91085a-7daf-4698-937e-59ee22954cde] Running
	I1213 13:06:08.680153  407368 system_pods.go:89] "kube-apiserver-addons-824997" [f5928f87-14d9-46a4-8d5a-8ba16d105ce0] Running
	I1213 13:06:08.680159  407368 system_pods.go:89] "kube-controller-manager-addons-824997" [1a5cb4a1-3733-40fb-b251-37fc23a92a63] Running
	I1213 13:06:08.680166  407368 system_pods.go:89] "kube-ingress-dns-minikube" [44750bbe-963d-418d-951d-edafb4cedd97] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 13:06:08.680171  407368 system_pods.go:89] "kube-proxy-98lpp" [14fad822-0002-4720-8b7d-bc0c91ed9b30] Running
	I1213 13:06:08.680183  407368 system_pods.go:89] "kube-scheduler-addons-824997" [299f4dbc-946f-43f2-a644-1c5735baebd4] Running
	I1213 13:06:08.680189  407368 system_pods.go:89] "metrics-server-85b7d694d7-7q9sx" [ae8558d5-7777-4f0b-93db-322b4e89148f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 13:06:08.680196  407368 system_pods.go:89] "nvidia-device-plugin-daemonset-vq87l" [e2a48ba5-a70e-40a2-add1-029a7bf0ef4c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 13:06:08.680203  407368 system_pods.go:89] "registry-6b586f9694-hh7xr" [6e260024-f4a8-4789-a4ce-0e6144434b7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:08.680228  407368 system_pods.go:89] "registry-creds-764b6fb674-pl9zq" [6f61ff7e-5bac-4fc7-945f-a1393d58c084] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:08.680241  407368 system_pods.go:89] "registry-proxy-99sw8" [ae5fdbb1-065d-40ba-9f98-fb248ffde339] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 13:06:08.680248  407368 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5cwwk" [47b836cb-86d6-424a-8e11-a53c79e74f88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:08.680262  407368 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fksqt" [2c56a756-f631-43ae-a6a8-e6fbb92ec911] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:08.680278  407368 system_pods.go:89] "storage-provisioner" [908235ee-d117-47df-9462-3e85c24ebf10] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:06:08.680300  407368 retry.go:31] will retry after 247.632898ms: missing components: kube-dns
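The "missing components: kube-dns" retries mean only that the CoreDNS pod has not reached Running yet; the node flipped to Ready moments earlier at 13:06:08, so kubelet is still starting the addon pods. A quick manual status check for the same component (CoreDNS carries the standard k8s-app=kube-dns label):

	kubectl --context addons-824997 -n kube-system get pods -l k8s-app=kube-dns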
	I1213 13:06:08.942158  407368 system_pods.go:86] 20 kube-system pods found
	I1213 13:06:08.942206  407368 system_pods.go:89] "amd-gpu-device-plugin-dmbzs" [279d1498-a14f-4451-817b-f77e32c0940f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 13:06:08.942225  407368 system_pods.go:89] "coredns-66bc5c9577-9s6qd" [0b34457c-f145-4879-a731-40e7dbbfa078] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:06:08.942244  407368 system_pods.go:89] "csi-hostpath-attacher-0" [a17d16f8-2f00-4320-838f-384ee0b9a07c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 13:06:08.942253  407368 system_pods.go:89] "csi-hostpath-resizer-0" [1a4be71b-60b1-418d-9dc4-0d44a44be3ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 13:06:08.942267  407368 system_pods.go:89] "csi-hostpathplugin-s7gmj" [d156b5d1-c702-41b9-90fb-6557fd9e680d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 13:06:08.942273  407368 system_pods.go:89] "etcd-addons-824997" [88965af2-50d1-43a1-a088-9e7ea0d37438] Running
	I1213 13:06:08.942281  407368 system_pods.go:89] "kindnet-5x6hz" [5b91085a-7daf-4698-937e-59ee22954cde] Running
	I1213 13:06:08.942287  407368 system_pods.go:89] "kube-apiserver-addons-824997" [f5928f87-14d9-46a4-8d5a-8ba16d105ce0] Running
	I1213 13:06:08.942297  407368 system_pods.go:89] "kube-controller-manager-addons-824997" [1a5cb4a1-3733-40fb-b251-37fc23a92a63] Running
	I1213 13:06:08.942306  407368 system_pods.go:89] "kube-ingress-dns-minikube" [44750bbe-963d-418d-951d-edafb4cedd97] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 13:06:08.942333  407368 system_pods.go:89] "kube-proxy-98lpp" [14fad822-0002-4720-8b7d-bc0c91ed9b30] Running
	I1213 13:06:08.942341  407368 system_pods.go:89] "kube-scheduler-addons-824997" [299f4dbc-946f-43f2-a644-1c5735baebd4] Running
	I1213 13:06:08.942354  407368 system_pods.go:89] "metrics-server-85b7d694d7-7q9sx" [ae8558d5-7777-4f0b-93db-322b4e89148f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 13:06:08.942363  407368 system_pods.go:89] "nvidia-device-plugin-daemonset-vq87l" [e2a48ba5-a70e-40a2-add1-029a7bf0ef4c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 13:06:08.942375  407368 system_pods.go:89] "registry-6b586f9694-hh7xr" [6e260024-f4a8-4789-a4ce-0e6144434b7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:08.942387  407368 system_pods.go:89] "registry-creds-764b6fb674-pl9zq" [6f61ff7e-5bac-4fc7-945f-a1393d58c084] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:08.942540  407368 system_pods.go:89] "registry-proxy-99sw8" [ae5fdbb1-065d-40ba-9f98-fb248ffde339] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 13:06:08.942568  407368 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5cwwk" [47b836cb-86d6-424a-8e11-a53c79e74f88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:08.942590  407368 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fksqt" [2c56a756-f631-43ae-a6a8-e6fbb92ec911] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:08.942603  407368 system_pods.go:89] "storage-provisioner" [908235ee-d117-47df-9462-3e85c24ebf10] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:06:08.942631  407368 retry.go:31] will retry after 284.999319ms: missing components: kube-dns
	I1213 13:06:08.962853  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:08.962876  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:09.042740  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:09.176461  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:09.232515  407368 system_pods.go:86] 20 kube-system pods found
	I1213 13:06:09.232550  407368 system_pods.go:89] "amd-gpu-device-plugin-dmbzs" [279d1498-a14f-4451-817b-f77e32c0940f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 13:06:09.232558  407368 system_pods.go:89] "coredns-66bc5c9577-9s6qd" [0b34457c-f145-4879-a731-40e7dbbfa078] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:06:09.232564  407368 system_pods.go:89] "csi-hostpath-attacher-0" [a17d16f8-2f00-4320-838f-384ee0b9a07c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 13:06:09.232569  407368 system_pods.go:89] "csi-hostpath-resizer-0" [1a4be71b-60b1-418d-9dc4-0d44a44be3ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 13:06:09.232575  407368 system_pods.go:89] "csi-hostpathplugin-s7gmj" [d156b5d1-c702-41b9-90fb-6557fd9e680d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 13:06:09.232579  407368 system_pods.go:89] "etcd-addons-824997" [88965af2-50d1-43a1-a088-9e7ea0d37438] Running
	I1213 13:06:09.232585  407368 system_pods.go:89] "kindnet-5x6hz" [5b91085a-7daf-4698-937e-59ee22954cde] Running
	I1213 13:06:09.232590  407368 system_pods.go:89] "kube-apiserver-addons-824997" [f5928f87-14d9-46a4-8d5a-8ba16d105ce0] Running
	I1213 13:06:09.232596  407368 system_pods.go:89] "kube-controller-manager-addons-824997" [1a5cb4a1-3733-40fb-b251-37fc23a92a63] Running
	I1213 13:06:09.232607  407368 system_pods.go:89] "kube-ingress-dns-minikube" [44750bbe-963d-418d-951d-edafb4cedd97] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 13:06:09.232613  407368 system_pods.go:89] "kube-proxy-98lpp" [14fad822-0002-4720-8b7d-bc0c91ed9b30] Running
	I1213 13:06:09.232619  407368 system_pods.go:89] "kube-scheduler-addons-824997" [299f4dbc-946f-43f2-a644-1c5735baebd4] Running
	I1213 13:06:09.232627  407368 system_pods.go:89] "metrics-server-85b7d694d7-7q9sx" [ae8558d5-7777-4f0b-93db-322b4e89148f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 13:06:09.232636  407368 system_pods.go:89] "nvidia-device-plugin-daemonset-vq87l" [e2a48ba5-a70e-40a2-add1-029a7bf0ef4c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 13:06:09.232646  407368 system_pods.go:89] "registry-6b586f9694-hh7xr" [6e260024-f4a8-4789-a4ce-0e6144434b7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:09.232655  407368 system_pods.go:89] "registry-creds-764b6fb674-pl9zq" [6f61ff7e-5bac-4fc7-945f-a1393d58c084] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:09.232659  407368 system_pods.go:89] "registry-proxy-99sw8" [ae5fdbb1-065d-40ba-9f98-fb248ffde339] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 13:06:09.232665  407368 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5cwwk" [47b836cb-86d6-424a-8e11-a53c79e74f88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:09.232672  407368 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fksqt" [2c56a756-f631-43ae-a6a8-e6fbb92ec911] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:09.232677  407368 system_pods.go:89] "storage-provisioner" [908235ee-d117-47df-9462-3e85c24ebf10] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:06:09.232696  407368 retry.go:31] will retry after 438.416858ms: missing components: kube-dns
	I1213 13:06:09.458830  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:09.459705  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:09.460800  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:09.675750  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:09.676410  407368 system_pods.go:86] 20 kube-system pods found
	I1213 13:06:09.676443  407368 system_pods.go:89] "amd-gpu-device-plugin-dmbzs" [279d1498-a14f-4451-817b-f77e32c0940f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 13:06:09.676452  407368 system_pods.go:89] "coredns-66bc5c9577-9s6qd" [0b34457c-f145-4879-a731-40e7dbbfa078] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:06:09.676461  407368 system_pods.go:89] "csi-hostpath-attacher-0" [a17d16f8-2f00-4320-838f-384ee0b9a07c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 13:06:09.676470  407368 system_pods.go:89] "csi-hostpath-resizer-0" [1a4be71b-60b1-418d-9dc4-0d44a44be3ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 13:06:09.676479  407368 system_pods.go:89] "csi-hostpathplugin-s7gmj" [d156b5d1-c702-41b9-90fb-6557fd9e680d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 13:06:09.676486  407368 system_pods.go:89] "etcd-addons-824997" [88965af2-50d1-43a1-a088-9e7ea0d37438] Running
	I1213 13:06:09.676494  407368 system_pods.go:89] "kindnet-5x6hz" [5b91085a-7daf-4698-937e-59ee22954cde] Running
	I1213 13:06:09.676501  407368 system_pods.go:89] "kube-apiserver-addons-824997" [f5928f87-14d9-46a4-8d5a-8ba16d105ce0] Running
	I1213 13:06:09.676508  407368 system_pods.go:89] "kube-controller-manager-addons-824997" [1a5cb4a1-3733-40fb-b251-37fc23a92a63] Running
	I1213 13:06:09.676517  407368 system_pods.go:89] "kube-ingress-dns-minikube" [44750bbe-963d-418d-951d-edafb4cedd97] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 13:06:09.676524  407368 system_pods.go:89] "kube-proxy-98lpp" [14fad822-0002-4720-8b7d-bc0c91ed9b30] Running
	I1213 13:06:09.676530  407368 system_pods.go:89] "kube-scheduler-addons-824997" [299f4dbc-946f-43f2-a644-1c5735baebd4] Running
	I1213 13:06:09.676539  407368 system_pods.go:89] "metrics-server-85b7d694d7-7q9sx" [ae8558d5-7777-4f0b-93db-322b4e89148f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 13:06:09.676552  407368 system_pods.go:89] "nvidia-device-plugin-daemonset-vq87l" [e2a48ba5-a70e-40a2-add1-029a7bf0ef4c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 13:06:09.676571  407368 system_pods.go:89] "registry-6b586f9694-hh7xr" [6e260024-f4a8-4789-a4ce-0e6144434b7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:09.676579  407368 system_pods.go:89] "registry-creds-764b6fb674-pl9zq" [6f61ff7e-5bac-4fc7-945f-a1393d58c084] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:09.676590  407368 system_pods.go:89] "registry-proxy-99sw8" [ae5fdbb1-065d-40ba-9f98-fb248ffde339] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 13:06:09.676599  407368 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5cwwk" [47b836cb-86d6-424a-8e11-a53c79e74f88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:09.676612  407368 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fksqt" [2c56a756-f631-43ae-a6a8-e6fbb92ec911] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:09.676621  407368 system_pods.go:89] "storage-provisioner" [908235ee-d117-47df-9462-3e85c24ebf10] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:06:09.676644  407368 retry.go:31] will retry after 552.097592ms: missing components: kube-dns
	I1213 13:06:09.959912  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:09.960109  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:09.960489  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:10.176174  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:10.233768  407368 system_pods.go:86] 20 kube-system pods found
	I1213 13:06:10.233801  407368 system_pods.go:89] "amd-gpu-device-plugin-dmbzs" [279d1498-a14f-4451-817b-f77e32c0940f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 13:06:10.233807  407368 system_pods.go:89] "coredns-66bc5c9577-9s6qd" [0b34457c-f145-4879-a731-40e7dbbfa078] Running
	I1213 13:06:10.233815  407368 system_pods.go:89] "csi-hostpath-attacher-0" [a17d16f8-2f00-4320-838f-384ee0b9a07c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 13:06:10.233824  407368 system_pods.go:89] "csi-hostpath-resizer-0" [1a4be71b-60b1-418d-9dc4-0d44a44be3ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 13:06:10.233850  407368 system_pods.go:89] "csi-hostpathplugin-s7gmj" [d156b5d1-c702-41b9-90fb-6557fd9e680d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 13:06:10.233860  407368 system_pods.go:89] "etcd-addons-824997" [88965af2-50d1-43a1-a088-9e7ea0d37438] Running
	I1213 13:06:10.233865  407368 system_pods.go:89] "kindnet-5x6hz" [5b91085a-7daf-4698-937e-59ee22954cde] Running
	I1213 13:06:10.233869  407368 system_pods.go:89] "kube-apiserver-addons-824997" [f5928f87-14d9-46a4-8d5a-8ba16d105ce0] Running
	I1213 13:06:10.233875  407368 system_pods.go:89] "kube-controller-manager-addons-824997" [1a5cb4a1-3733-40fb-b251-37fc23a92a63] Running
	I1213 13:06:10.233882  407368 system_pods.go:89] "kube-ingress-dns-minikube" [44750bbe-963d-418d-951d-edafb4cedd97] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 13:06:10.233886  407368 system_pods.go:89] "kube-proxy-98lpp" [14fad822-0002-4720-8b7d-bc0c91ed9b30] Running
	I1213 13:06:10.233890  407368 system_pods.go:89] "kube-scheduler-addons-824997" [299f4dbc-946f-43f2-a644-1c5735baebd4] Running
	I1213 13:06:10.233895  407368 system_pods.go:89] "metrics-server-85b7d694d7-7q9sx" [ae8558d5-7777-4f0b-93db-322b4e89148f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 13:06:10.233901  407368 system_pods.go:89] "nvidia-device-plugin-daemonset-vq87l" [e2a48ba5-a70e-40a2-add1-029a7bf0ef4c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 13:06:10.233907  407368 system_pods.go:89] "registry-6b586f9694-hh7xr" [6e260024-f4a8-4789-a4ce-0e6144434b7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:10.233912  407368 system_pods.go:89] "registry-creds-764b6fb674-pl9zq" [6f61ff7e-5bac-4fc7-945f-a1393d58c084] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:10.233917  407368 system_pods.go:89] "registry-proxy-99sw8" [ae5fdbb1-065d-40ba-9f98-fb248ffde339] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 13:06:10.233930  407368 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5cwwk" [47b836cb-86d6-424a-8e11-a53c79e74f88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:10.233940  407368 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fksqt" [2c56a756-f631-43ae-a6a8-e6fbb92ec911] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:10.233945  407368 system_pods.go:89] "storage-provisioner" [908235ee-d117-47df-9462-3e85c24ebf10] Running
	I1213 13:06:10.233962  407368 system_pods.go:126] duration metric: took 1.65324427s to wait for k8s-apps to be running ...
	I1213 13:06:10.233975  407368 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 13:06:10.234038  407368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:06:10.250831  407368 system_svc.go:56] duration metric: took 16.843903ms WaitForService to wait for kubelet
	I1213 13:06:10.250869  407368 kubeadm.go:587] duration metric: took 13.900947062s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:06:10.250897  407368 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:06:10.254227  407368 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:06:10.254267  407368 node_conditions.go:123] node cpu capacity is 8
	I1213 13:06:10.254285  407368 node_conditions.go:105] duration metric: took 3.381427ms to run NodePressure ...
	I1213 13:06:10.254298  407368 start.go:242] waiting for startup goroutines ...
	I1213 13:06:10.460020  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:10.460268  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:10.460513  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:10.676536  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:10.960265  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:10.960448  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:10.960508  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:11.176889  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:11.460123  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:11.460531  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:11.460591  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:11.675182  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:11.960075  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:11.960086  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:11.960175  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:12.176758  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:12.460258  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:12.460652  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:12.460768  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:12.676076  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:12.959401  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:12.959402  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:12.960894  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:13.176735  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:13.460839  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:13.460889  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:13.460913  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:13.675959  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:13.958793  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:13.958970  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:13.960455  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:14.199886  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:14.460669  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:14.460766  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:14.460826  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:14.675390  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:14.959676  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:14.959868  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:14.960198  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:15.176612  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:15.459611  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:15.459855  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:15.460673  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:15.675358  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:15.959399  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:15.959533  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:15.959984  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:16.176472  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:16.459702  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:16.459724  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:16.460237  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:16.676130  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:16.959407  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:16.959606  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:16.961277  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:17.176054  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:17.459269  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:17.459307  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:17.460616  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:17.676688  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:17.960364  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:17.960633  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:17.960645  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:18.175977  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:18.458960  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:18.459340  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:18.460543  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:18.676434  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:18.959921  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:18.960127  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:18.960159  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:19.176439  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:19.459778  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:19.459958  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:19.460144  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:19.676368  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:19.961492  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:19.961571  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:19.961622  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:20.176733  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:20.459974  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:20.460262  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:20.460437  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:20.676376  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:20.961655  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:20.961657  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:20.961779  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:21.176955  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:21.459644  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:21.459655  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:21.460697  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:21.675898  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:21.959577  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:21.959756  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:21.960536  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:22.176460  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:22.459575  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:22.459815  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:22.460126  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:22.676078  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:22.959575  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:22.959617  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:22.961003  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:23.176094  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:23.459659  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:23.459757  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:23.460708  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:23.675991  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:23.959818  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:23.960883  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:23.961074  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:24.176265  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:24.459739  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:24.459792  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:24.460207  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:24.676104  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:24.959299  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:24.959296  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:24.960773  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:25.175924  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:25.479363  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:25.479616  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:25.479672  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:25.675858  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:25.958978  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:25.959793  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:25.960746  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:26.175757  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:26.459279  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:26.459280  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:26.460630  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:26.676303  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:26.959846  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:26.960255  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:26.960469  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:27.176153  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:27.459664  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:27.459750  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:27.460691  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:27.675401  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:27.959672  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:27.959714  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:27.960745  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:28.175798  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:28.460587  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:28.460675  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:28.460754  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:28.675918  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:28.992508  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:28.992688  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:28.992746  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:29.176120  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:29.611867  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:29.612007  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:29.612151  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:29.675524  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:29.959372  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:29.959408  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:29.959813  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:30.175950  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:30.459259  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:30.459369  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:30.460455  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:30.677521  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:30.960412  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:30.960462  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:30.960488  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:31.176521  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:31.460873  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:31.460882  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:31.460923  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:31.675575  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:32.007971  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:32.008008  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:32.008176  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:32.175744  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:32.460148  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:32.460430  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:32.460545  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:32.676281  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:32.960031  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:32.960076  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:32.960398  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:33.176402  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:33.459876  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:33.460021  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:33.460227  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:33.676029  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:33.959134  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:33.959130  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:33.960749  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:34.176374  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:34.459598  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:34.459786  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:34.460056  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:34.675646  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:34.959558  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:34.959712  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:34.960117  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:35.176273  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:35.460101  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:35.460417  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:35.460450  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:35.676561  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:35.959466  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:35.959701  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:35.960030  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:36.176438  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:36.460002  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:36.460012  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:36.460386  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:36.676209  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:37.027428  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:37.027481  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:37.027566  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:37.176399  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:37.460197  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:37.460205  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:37.460267  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:37.676111  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:37.959503  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:37.959569  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:37.960600  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:38.175674  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:38.459857  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:38.459912  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:38.460223  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:38.676570  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:38.960098  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:38.960274  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:38.961016  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:39.176253  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:39.459934  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:39.460010  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:39.460271  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:39.676604  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:39.963889  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:39.963963  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:39.963972  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:40.176518  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:40.461438  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:40.461788  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:40.463191  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:40.675559  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:41.058415  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:41.058513  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:41.058688  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:41.175416  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:41.460412  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:41.460461  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:41.460692  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:41.676856  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:41.960085  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:41.960159  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:41.960259  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:42.176417  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:42.459772  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:42.459862  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:42.460211  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:42.676403  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:42.959640  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:42.959828  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:42.961800  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:43.175771  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:43.460556  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:43.460699  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:43.460744  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:43.676456  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:43.959680  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:43.959882  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:43.960071  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:44.175870  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:44.459955  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:44.463644  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:44.463878  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:44.675865  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:44.958877  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:44.959667  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:44.960390  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:45.176041  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:45.459912  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:45.459980  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:45.460073  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:45.675537  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:45.959670  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:45.959882  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:45.960138  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:46.175992  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:46.459582  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:46.459656  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:46.460351  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:46.676021  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:46.958987  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:46.959025  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:46.960534  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:47.176154  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:47.459588  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:47.459631  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:47.460558  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:47.675579  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:47.959958  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:47.959987  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:47.960297  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:48.176438  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:48.459365  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:48.459470  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:48.459843  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:48.676081  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:48.959101  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:48.959153  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:48.960442  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:49.176821  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:49.459501  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:49.459501  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:49.460637  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:49.675307  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:49.959564  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:49.959630  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:49.959792  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:50.176437  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:50.460166  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:50.460504  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:50.460669  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:50.677465  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:50.961120  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:50.961229  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:50.961361  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:51.176037  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:51.459599  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:51.459634  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:51.460749  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:51.675774  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:51.960437  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:51.960466  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:51.960609  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:52.176494  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:52.459771  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:52.459904  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:52.460125  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:52.675801  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:52.959832  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:52.959934  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:52.960501  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:53.176168  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:53.459797  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:53.459847  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:53.460741  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:53.676131  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:53.959492  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:53.959635  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:53.960650  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:54.177002  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:54.458892  407368 kapi.go:107] duration metric: took 55.503397362s to wait for kubernetes.io/minikube-addons=registry ...
	I1213 13:06:54.459961  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:54.460909  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:54.675610  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:54.959531  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:54.960363  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:55.177285  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:55.459781  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:55.461126  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:55.675638  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:55.959970  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:55.960568  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:56.177128  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:56.459740  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:56.460135  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:56.676066  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:56.961306  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:56.961593  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:57.175523  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:57.459915  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:57.460288  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:57.676271  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:57.959537  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:57.960019  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:58.176198  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:58.459611  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:58.460444  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:58.676527  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:59.014263  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:59.015062  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:59.176542  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:59.460496  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:59.460664  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:59.675552  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:59.959447  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:59.959972  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:00.179989  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:00.538896  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:00.538909  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:00.675644  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:00.960020  407368 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:00.960858  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:01.211360  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:01.471116  407368 kapi.go:107] duration metric: took 1m2.515135058s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 13:07:01.471134  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:01.676298  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:01.961695  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:02.176732  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:02.460767  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:02.675379  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:02.961738  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:03.175566  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:03.461208  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:03.676265  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:03.961937  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:04.175723  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:04.461005  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:04.675877  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:04.961177  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:05.176550  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:05.461160  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:05.676064  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:05.961495  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:06.176643  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:06.461038  407368 kapi.go:107] duration metric: took 1m7.503545926s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 13:07:06.675829  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:07.175893  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:07.676468  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:08.175675  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:08.676197  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:09.175445  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:09.675735  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:10.175977  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:10.676553  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:11.175568  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:11.675907  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:12.186904  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:12.676956  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:13.176449  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:13.675859  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:14.176521  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:14.675747  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:15.176748  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:15.676221  407368 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:16.175861  407368 kapi.go:107] duration metric: took 1m11.503259982s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 13:07:16.177612  407368 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-824997 cluster.
	I1213 13:07:16.179037  407368 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 13:07:16.180207  407368 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1213 13:07:16.181688  407368 out.go:179] * Enabled addons: ingress-dns, volcano, storage-provisioner, nvidia-device-plugin, amd-gpu-device-plugin, registry-creds, inspektor-gadget, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1213 13:07:16.182925  407368 addons.go:530] duration metric: took 1m19.832962101s for enable addons: enabled=[ingress-dns volcano storage-provisioner nvidia-device-plugin amd-gpu-device-plugin registry-creds inspektor-gadget cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1213 13:07:16.182976  407368 start.go:247] waiting for cluster config update ...
	I1213 13:07:16.183036  407368 start.go:256] writing updated cluster config ...
	I1213 13:07:16.183304  407368 ssh_runner.go:195] Run: rm -f paused
	I1213 13:07:16.187535  407368 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:07:16.275676  407368 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9s6qd" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:07:16.280222  407368 pod_ready.go:94] pod "coredns-66bc5c9577-9s6qd" is "Ready"
	I1213 13:07:16.280250  407368 pod_ready.go:86] duration metric: took 4.539453ms for pod "coredns-66bc5c9577-9s6qd" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:07:16.282113  407368 pod_ready.go:83] waiting for pod "etcd-addons-824997" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:07:16.285754  407368 pod_ready.go:94] pod "etcd-addons-824997" is "Ready"
	I1213 13:07:16.285785  407368 pod_ready.go:86] duration metric: took 3.648121ms for pod "etcd-addons-824997" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:07:16.287504  407368 pod_ready.go:83] waiting for pod "kube-apiserver-addons-824997" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:07:16.291076  407368 pod_ready.go:94] pod "kube-apiserver-addons-824997" is "Ready"
	I1213 13:07:16.291101  407368 pod_ready.go:86] duration metric: took 3.575358ms for pod "kube-apiserver-addons-824997" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:07:16.292864  407368 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-824997" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:07:16.590965  407368 pod_ready.go:94] pod "kube-controller-manager-addons-824997" is "Ready"
	I1213 13:07:16.590993  407368 pod_ready.go:86] duration metric: took 298.107731ms for pod "kube-controller-manager-addons-824997" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:07:16.792708  407368 pod_ready.go:83] waiting for pod "kube-proxy-98lpp" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:07:17.190890  407368 pod_ready.go:94] pod "kube-proxy-98lpp" is "Ready"
	I1213 13:07:17.190927  407368 pod_ready.go:86] duration metric: took 398.187902ms for pod "kube-proxy-98lpp" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:07:17.391552  407368 pod_ready.go:83] waiting for pod "kube-scheduler-addons-824997" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:07:17.791611  407368 pod_ready.go:94] pod "kube-scheduler-addons-824997" is "Ready"
	I1213 13:07:17.791642  407368 pod_ready.go:86] duration metric: took 400.063906ms for pod "kube-scheduler-addons-824997" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:07:17.791654  407368 pod_ready.go:40] duration metric: took 1.604094704s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:07:17.836131  407368 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 13:07:17.838243  407368 out.go:179] * Done! kubectl is now configured to use "addons-824997" cluster and "default" namespace by default
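
	[editor's note] The gcp-auth messages above mention opting a pod out of credential mounting via a `gcp-auth-skip-secret` label. A minimal sketch of what that looks like in a pod spec, for illustration only (the pod name and image are hypothetical; the label key is taken from the log message above, and the "true" value follows the addon's documented usage):

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: example-no-gcp-auth        # hypothetical name, illustration only
	      labels:
	        gcp-auth-skip-secret: "true"   # label key from the gcp-auth output above; tells the addon not to mount credentials
	    spec:
	      containers:
	      - name: app
	        image: busybox                 # hypothetical image
	        command: ["sleep", "3600"]

	Per the same output, pods created before the addon finished enabling would need to be recreated (or the addon re-enabled with --refresh) for the mounting behavior to apply.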
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                       NAMESPACE
	1c8519616a7e1       a236f84b9d5d2       4 minutes ago       Running             nginx                     0                   191c86e6b438e       nginx                                     default
	081bc13cbf9c6       56cc512116c8f       5 minutes ago       Running             busybox                   0                   208b5ad2d1c3d       busybox                                   default
	d4f5a8416e5a3       e16d1e3a10667       6 minutes ago       Running             local-path-provisioner    0                   015f4ade74be6       local-path-provisioner-648f6765c9-wv987   local-path-storage
	ef3e180105d34       6e38f40d628db       7 minutes ago       Running             storage-provisioner       0                   0f0cfed499bfe       storage-provisioner                       kube-system
	5d618af0d8e1e       52546a367cc9e       7 minutes ago       Running             coredns                   0                   9ebb74598f656       coredns-66bc5c9577-9s6qd                  kube-system
	e8fa7008cca94       409467f978b4a       7 minutes ago       Running             kindnet-cni               0                   95789f14b701f       kindnet-5x6hz                             kube-system
	cd49174b103b3       8aa150647e88a       7 minutes ago       Running             kube-proxy                0                   82bf13b8ebfb6       kube-proxy-98lpp                          kube-system
	6cdd58518b568       88320b5498ff2       7 minutes ago       Running             kube-scheduler            0                   df6ba47ee51d5       kube-scheduler-addons-824997              kube-system
	a1d41569800a9       01e8bacf0f500       7 minutes ago       Running             kube-controller-manager   0                   3a631d590e0ee       kube-controller-manager-addons-824997     kube-system
	54d63915291d4       a5f569d49a979       7 minutes ago       Running             kube-apiserver            0                   c87eb6f433593       kube-apiserver-addons-824997              kube-system
	42e7fc7489561       a3e246e9556e9       7 minutes ago       Running             etcd                      0                   50c87ed2afb79       etcd-addons-824997                        kube-system
	
	
	==> containerd <==
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.159013843Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14fad822_0002_4720_8b7d_bc0c91ed9b30.slice/cri-containerd-cd49174b103b3c3d9f0ae534c24a125eb3381a4da34117061fc2fe5ca2e42427.scope/hugetlb.1GB.events\""
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.159866400Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac991c1c73bd28d02d5ee45aa8d52fe9.slice/cri-containerd-42e7fc74895612e816d421de479e9d1be9671561207dce5a3d4baade21a21ead.scope/hugetlb.2MB.events\""
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.159978398Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac991c1c73bd28d02d5ee45aa8d52fe9.slice/cri-containerd-42e7fc74895612e816d421de479e9d1be9671561207dce5a3d4baade21a21ead.scope/hugetlb.1GB.events\""
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.160747092Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0946841f_366e_4ef7_813d_2e659f071117.slice/cri-containerd-d4f5a8416e5a31550c8a3d383f1b861c3e3eba2e9eca5617082e722d81dd98eb.scope/hugetlb.2MB.events\""
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.160839192Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0946841f_366e_4ef7_813d_2e659f071117.slice/cri-containerd-d4f5a8416e5a31550c8a3d383f1b861c3e3eba2e9eca5617082e722d81dd98eb.scope/hugetlb.1GB.events\""
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.161613301Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod438c306ccf5c43b8d395343d84cd08ff.slice/cri-containerd-6cdd58518b568d86b1289c158b8e6af4c728317662ecb1a71e942ff6cf227d3e.scope/hugetlb.2MB.events\""
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.161718597Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod438c306ccf5c43b8d395343d84cd08ff.slice/cri-containerd-6cdd58518b568d86b1289c158b8e6af4c728317662ecb1a71e942ff6cf227d3e.scope/hugetlb.1GB.events\""
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.162553568Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod908235ee_d117_47df_9462_3e85c24ebf10.slice/cri-containerd-ef3e180105d3499836be1143d99c98d35c936107cc4d223477d25e36dde57d8a.scope/hugetlb.2MB.events\""
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.162664172Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod908235ee_d117_47df_9462_3e85c24ebf10.slice/cri-containerd-ef3e180105d3499836be1143d99c98d35c936107cc4d223477d25e36dde57d8a.scope/hugetlb.1GB.events\""
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.163348687Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e4639c7_239a_4123_bbb0_89f66eac9682.slice/cri-containerd-1c8519616a7e1e1e9f7a0eaae2eea09fe8d4f3ac20896fd73405e2b89ab313b3.scope/hugetlb.2MB.events\""
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.163432640Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e4639c7_239a_4123_bbb0_89f66eac9682.slice/cri-containerd-1c8519616a7e1e1e9f7a0eaae2eea09fe8d4f3ac20896fd73405e2b89ab313b3.scope/hugetlb.1GB.events\""
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.164201181Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13bf900c_a6e0_4525_ab12_0eec78133355.slice/cri-containerd-081bc13cbf9c66be8e9b156bdfef2a6aa8c4722239714c330a2bd1c47def7df3.scope/hugetlb.2MB.events\""
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.164273134Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13bf900c_a6e0_4525_ab12_0eec78133355.slice/cri-containerd-081bc13cbf9c66be8e9b156bdfef2a6aa8c4722239714c330a2bd1c47def7df3.scope/hugetlb.1GB.events\""
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.165130407Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b34457c_f145_4879_a731_40e7dbbfa078.slice/cri-containerd-5d618af0d8e1e55dac172d8582f0bb0143ee3859bf707bebfe747ed5cf9f7b30.scope/hugetlb.2MB.events\""
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.165252473Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b34457c_f145_4879_a731_40e7dbbfa078.slice/cri-containerd-5d618af0d8e1e55dac172d8582f0bb0143ee3859bf707bebfe747ed5cf9f7b30.scope/hugetlb.1GB.events\""
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.165946147Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30467930691cb235f2565e601933f3cc.slice/cri-containerd-a1d41569800a9d997b5cd673672b36cc43180b45b3c4e122143597df514cea5f.scope/hugetlb.2MB.events\""
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.166030508Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30467930691cb235f2565e601933f3cc.slice/cri-containerd-a1d41569800a9d997b5cd673672b36cc43180b45b3c4e122143597df514cea5f.scope/hugetlb.1GB.events\""
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.196096180Z" level=info msg="container event discarded" container=5527a2abd7149828a2f5a5aa818e530b70515686952f2de20b6dc5f42ca5e0db type=CONTAINER_STOPPED_EVENT
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.246629460Z" level=info msg="container event discarded" container=75078ce1871eb36110264d5ff372b176fc27c283719a0202be982dcf18706c7a type=CONTAINER_STOPPED_EVENT
	Dec 13 13:13:12 addons-824997 containerd[664]: time="2025-12-13T13:13:12.399770825Z" level=info msg="container event discarded" container=5527a2abd7149828a2f5a5aa818e530b70515686952f2de20b6dc5f42ca5e0db type=CONTAINER_DELETED_EVENT
	Dec 13 13:13:18 addons-824997 containerd[664]: time="2025-12-13T13:13:18.040884287Z" level=info msg="container event discarded" container=070b75266029b670eb250c5e99bc8f7528f149e9bcf5f4d9938255929bc61e00 type=CONTAINER_CREATED_EVENT
	Dec 13 13:13:18 addons-824997 containerd[664]: time="2025-12-13T13:13:18.040966795Z" level=info msg="container event discarded" container=070b75266029b670eb250c5e99bc8f7528f149e9bcf5f4d9938255929bc61e00 type=CONTAINER_STARTED_EVENT
	Dec 13 13:13:18 addons-824997 containerd[664]: time="2025-12-13T13:13:18.204988808Z" level=info msg="container event discarded" container=589ae6eaf71b034ad14c8d0e27e3a64cdc748a0dbc4d27b85f5670d37c6c4e76 type=CONTAINER_STOPPED_EVENT
	Dec 13 13:13:18 addons-824997 containerd[664]: time="2025-12-13T13:13:18.263810260Z" level=info msg="container event discarded" container=49544e19a21e5c4cbaa763bdd073fff9de17729fb4f0e3d9b6c8f7101eac87eb type=CONTAINER_STOPPED_EVENT
	Dec 13 13:13:18 addons-824997 containerd[664]: time="2025-12-13T13:13:18.423084769Z" level=info msg="container event discarded" container=589ae6eaf71b034ad14c8d0e27e3a64cdc748a0dbc4d27b85f5670d37c6c4e76 type=CONTAINER_DELETED_EVENT
	
	
	==> coredns [5d618af0d8e1e55dac172d8582f0bb0143ee3859bf707bebfe747ed5cf9f7b30] <==
	[INFO] 10.244.0.25:35744 - 52595 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,aa,rd,ra 180 0.000334624s
	[INFO] 10.244.0.25:39765 - 15204 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,aa,rd,ra 180 0.000048319s
	[INFO] 10.244.0.25:33231 - 18313 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,aa,rd,ra 180 0.00031486s
	[INFO] 10.244.0.25:51969 - 23092 "AAAA IN hello-world-app.default.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 98 false 512" NXDOMAIN qr,aa,rd,ra 209 0.00020037s
	[INFO] 10.244.0.25:58274 - 65040 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,aa,rd,ra 188 0.000446644s
	[INFO] 10.244.0.25:45391 - 25346 "A IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,aa,rd,ra 188 0.000118498s
	[INFO] 10.244.0.25:35744 - 15806 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,aa,rd,ra 180 0.000091241s
	[INFO] 10.244.0.25:54261 - 59858 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000094468s
	[INFO] 10.244.0.25:50146 - 59424 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000092449s
	[INFO] 10.244.0.25:51969 - 48603 "A IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,aa,rd,ra 188 0.000052365s
	[INFO] 10.244.0.25:45391 - 1751 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,aa,rd,ra 188 0.000064048s
	[INFO] 10.244.0.25:39765 - 1893 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000137085s
	[INFO] 10.244.0.25:33231 - 54326 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,aa,rd,ra 180 0.000229517s
	[INFO] 10.244.0.25:58274 - 36532 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,aa,rd,ra 180 0.000294557s
	[INFO] 10.244.0.25:35744 - 9733 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000481281s
	[INFO] 10.244.0.25:45391 - 20769 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,aa,rd,ra 180 0.000495368s
	[INFO] 10.244.0.25:51969 - 58831 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,aa,rd,ra 188 0.000528832s
	[INFO] 10.244.0.25:33231 - 13052 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000668767s
	[INFO] 10.244.0.25:58274 - 5986 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,aa,rd,ra 180 0.000581313s
	[INFO] 10.244.0.25:45391 - 63248 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,aa,rd,ra 180 0.00042289s
	[INFO] 10.244.0.25:51969 - 4931 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,aa,rd,ra 180 0.000505692s
	[INFO] 10.244.0.25:58274 - 42026 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000427297s
	[INFO] 10.244.0.25:45391 - 12275 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000310796s
	[INFO] 10.244.0.25:51969 - 56036 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,aa,rd,ra 180 0.000119492s
	[INFO] 10.244.0.25:51969 - 16491 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000110026s
	
	
	==> describe nodes <==
	Name:               addons-824997
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-824997
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=addons-824997
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_05_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-824997
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:05:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-824997
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:13:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:08:54 +0000   Sat, 13 Dec 2025 13:05:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:08:54 +0000   Sat, 13 Dec 2025 13:05:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:08:54 +0000   Sat, 13 Dec 2025 13:05:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:08:54 +0000   Sat, 13 Dec 2025 13:06:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-824997
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                21b6b6da-b8fa-4450-98c5-681fbb9b4901
	  Boot ID:                    90a4a0ca-634d-4c7c-8727-6b2f644cc467
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  default                     hello-world-app-5d498dc89-7zrqg                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 coredns-66bc5c9577-9s6qd                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m22s
	  kube-system                 etcd-addons-824997                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m28s
	  kube-system                 kindnet-5x6hz                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m22s
	  kube-system                 kube-apiserver-addons-824997                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-controller-manager-addons-824997                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-proxy-98lpp                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-scheduler-addons-824997                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	  local-path-storage          helper-pod-create-pvc-69b3bcb2-23fb-4428-9dd8-2694196e4f24    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  local-path-storage          local-path-provisioner-648f6765c9-wv987                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m20s  kube-proxy       
	  Normal  Starting                 7m28s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m28s  kubelet          Node addons-824997 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m28s  kubelet          Node addons-824997 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m28s  kubelet          Node addons-824997 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m23s  node-controller  Node addons-824997 event: Registered Node addons-824997 in Controller
	  Normal  NodeReady                7m10s  kubelet          Node addons-824997 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 3d 25 07 3f b0 08 06
	[ +15.550392] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 5b b2 4e f6 0c 08 06
	[  +0.000437] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 3d 25 07 3f b0 08 06
	[Dec13 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 56 d0 e6 62 ca 08 06
	[  +0.000156] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 2b b1 e9 34 e9 08 06
	[  +9.601084] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 6b 2f 7c 08 35 08 06
	[  +6.680640] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 7a 15 04 2e f9 08 06
	[  +0.000316] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 26 9c 63 03 a8 a5 08 06
	[  +0.000500] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5e bf e9 59 0c fc 08 06
	[ +14.220693] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 6b 48 e9 3e 65 08 06
	[  +0.000354] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 96 6b 2f 7c 08 35 08 06
	[ +17.192216] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 ce b1 a0 1c 7b 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 56 d0 e6 62 ca 08 06
	
	
	==> etcd [42e7fc74895612e816d421de479e9d1be9671561207dce5a3d4baade21a21ead] <==
	{"level":"warn","ts":"2025-12-13T13:05:59.540495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44308","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T13:06:13.931859Z","caller":"traceutil/trace.go:172","msg":"trace[365634714] transaction","detail":"{read_only:false; response_revision:1100; number_of_response:1; }","duration":"110.43565ms","start":"2025-12-13T13:06:13.821403Z","end":"2025-12-13T13:06:13.931838Z","steps":["trace[365634714] 'process raft request'  (duration: 110.296281ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:06:25.390597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:06:25.400493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:06:25.433497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:06:25.471528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:06:25.498287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:06:25.509001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:06:25.515513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:06:25.523819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:06:25.535213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:06:25.541441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:06:25.550921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:06:25.561516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:06:29.609609Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.720831ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-13T13:06:29.609699Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.867324ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T13:06:29.609714Z","caller":"traceutil/trace.go:172","msg":"trace[214234419] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1163; }","duration":"151.875778ms","start":"2025-12-13T13:06:29.457823Z","end":"2025-12-13T13:06:29.609699Z","steps":["trace[214234419] 'range keys from in-memory index tree'  (duration: 151.648459ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:06:29.609732Z","caller":"traceutil/trace.go:172","msg":"trace[886235972] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1163; }","duration":"151.900709ms","start":"2025-12-13T13:06:29.457823Z","end":"2025-12-13T13:06:29.609724Z","steps":["trace[886235972] 'range keys from in-memory index tree'  (duration: 151.800705ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:06:29.609655Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.099869ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T13:06:29.609824Z","caller":"traceutil/trace.go:172","msg":"trace[384940195] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1163; }","duration":"150.275567ms","start":"2025-12-13T13:06:29.459535Z","end":"2025-12-13T13:06:29.609810Z","steps":["trace[384940195] 'range keys from in-memory index tree'  (duration: 150.01893ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:06:41.055469Z","caller":"traceutil/trace.go:172","msg":"trace[963304637] transaction","detail":"{read_only:false; response_revision:1224; number_of_response:1; }","duration":"115.54496ms","start":"2025-12-13T13:06:40.939903Z","end":"2025-12-13T13:06:41.055448Z","steps":["trace[963304637] 'process raft request'  (duration: 115.325999ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:07:01.469564Z","caller":"traceutil/trace.go:172","msg":"trace[1376994307] transaction","detail":"{read_only:false; response_revision:1373; number_of_response:1; }","duration":"118.824251ms","start":"2025-12-13T13:07:01.350715Z","end":"2025-12-13T13:07:01.469539Z","steps":["trace[1376994307] 'process raft request'  (duration: 118.59224ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:07:48.113151Z","caller":"traceutil/trace.go:172","msg":"trace[580142504] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1559; }","duration":"123.938832ms","start":"2025-12-13T13:07:47.989191Z","end":"2025-12-13T13:07:48.113129Z","steps":["trace[580142504] 'process raft request'  (duration: 110.279204ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:07:48.113364Z","caller":"traceutil/trace.go:172","msg":"trace[968379634] transaction","detail":"{read_only:false; response_revision:1560; number_of_response:1; }","duration":"122.898721ms","start":"2025-12-13T13:07:47.990448Z","end":"2025-12-13T13:07:48.113347Z","steps":["trace[968379634] 'process raft request'  (duration: 122.620165ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:08:24.496219Z","caller":"traceutil/trace.go:172","msg":"trace[1179467441] transaction","detail":"{read_only:false; response_revision:1870; number_of_response:1; }","duration":"119.256548ms","start":"2025-12-13T13:08:24.376945Z","end":"2025-12-13T13:08:24.496201Z","steps":["trace[1179467441] 'process raft request'  (duration: 119.160435ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:13:18 up  1:55,  0 user,  load average: 0.05, 0.51, 1.04
	Linux addons-824997 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e8fa7008cca946a39028ebcfeb3fa2a27f8a9af4a3496f19016d1179fdd1604f] <==
	I1213 13:11:18.146790       1 main.go:301] handling current node
	I1213 13:11:28.146782       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:11:28.146823       1 main.go:301] handling current node
	I1213 13:11:38.155023       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:11:38.155057       1 main.go:301] handling current node
	I1213 13:11:48.147462       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:11:48.147501       1 main.go:301] handling current node
	I1213 13:11:58.146792       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:11:58.146835       1 main.go:301] handling current node
	I1213 13:12:08.147483       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:12:08.147552       1 main.go:301] handling current node
	I1213 13:12:18.147410       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:12:18.147475       1 main.go:301] handling current node
	I1213 13:12:28.155575       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:12:28.155619       1 main.go:301] handling current node
	I1213 13:12:38.150074       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:12:38.150137       1 main.go:301] handling current node
	I1213 13:12:48.155423       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:12:48.155469       1 main.go:301] handling current node
	I1213 13:12:58.146782       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:12:58.146876       1 main.go:301] handling current node
	I1213 13:13:08.147366       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:13:08.147417       1 main.go:301] handling current node
	I1213 13:13:18.150492       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:13:18.150534       1 main.go:301] handling current node
	
	
	==> kube-apiserver [54d63915291d413556538ed8188dd84d193160c75bb0684acac8ace555f837eb] <==
	W1213 13:07:49.390149       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W1213 13:07:49.416388       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1213 13:07:49.429955       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1213 13:07:49.563287       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W1213 13:07:49.789085       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1213 13:07:49.887900       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1213 13:08:08.228470       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57266: use of closed network connection
	E1213 13:08:08.406379       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57290: use of closed network connection
	I1213 13:08:18.097695       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.140.112"}
	I1213 13:08:35.320708       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1213 13:08:35.492287       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.184.188"}
	I1213 13:08:45.711277       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1213 13:08:47.026341       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.166.147"}
	I1213 13:08:57.411154       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 13:08:57.411205       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 13:08:57.427218       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 13:08:57.427268       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 13:08:57.440532       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 13:08:57.440576       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 13:08:57.463815       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 13:08:57.463930       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1213 13:08:58.428185       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1213 13:08:58.463972       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1213 13:08:58.485365       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1213 13:09:27.868561       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [a1d41569800a9d997b5cd673672b36cc43180b45b3c4e122143597df514cea5f] <==
	E1213 13:12:28.194418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:12:31.755929       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:12:31.757033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:12:35.250155       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:12:35.251156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:12:37.756384       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:12:37.757404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:12:40.689538       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:12:40.690614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:12:45.113576       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:12:45.114796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:12:47.565359       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:12:47.566309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:12:50.090356       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:12:50.091487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:12:52.245809       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:12:52.246887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:12:55.717571       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:12:55.718587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:12:57.428335       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:12:57.429452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:13:07.137018       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:13:07.138109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:13:13.673497       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:13:13.674720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [cd49174b103b3c3d9f0ae534c24a125eb3381a4da34117061fc2fe5ca2e42427] <==
	I1213 13:05:57.431708       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:05:57.538084       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 13:05:57.640247       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 13:05:57.640345       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1213 13:05:57.640462       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:05:57.693692       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:05:57.693794       1 server_linux.go:132] "Using iptables Proxier"
	I1213 13:05:57.721751       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:05:57.730155       1 server.go:527] "Version info" version="v1.34.2"
	I1213 13:05:57.730190       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:05:57.738635       1 config.go:309] "Starting node config controller"
	I1213 13:05:57.738660       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:05:57.739103       1 config.go:200] "Starting service config controller"
	I1213 13:05:57.739114       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:05:57.739130       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:05:57.739136       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:05:57.739149       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:05:57.739154       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:05:57.839049       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:05:57.840244       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 13:05:57.840299       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 13:05:57.840635       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [6cdd58518b568d86b1289c158b8e6af4c728317662ecb1a71e942ff6cf227d3e] <==
	E1213 13:05:48.617884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 13:05:48.618032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:05:48.618214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 13:05:48.618262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 13:05:48.618212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 13:05:48.618297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 13:05:48.618456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 13:05:48.618524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 13:05:48.618577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 13:05:48.618566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 13:05:48.618634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 13:05:48.618709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 13:05:48.618709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 13:05:48.618763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:05:48.618929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 13:05:48.618959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 13:05:48.618980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 13:05:49.427257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 13:05:49.433225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 13:05:49.479445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:05:49.509629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 13:05:49.531143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 13:05:49.608733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 13:05:49.702463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1213 13:05:52.716401       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 13:12:33 addons-824997 kubelet[1416]: I1213 13:12:33.828419    1416 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/64800cf4-f039-4e49-ab7d-34d3921f718f-script\") on node \"addons-824997\" DevicePath \"\""
	Dec 13 13:12:33 addons-824997 kubelet[1416]: I1213 13:12:33.828458    1416 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4jxh2\" (UniqueName: \"kubernetes.io/projected/64800cf4-f039-4e49-ab7d-34d3921f718f-kube-api-access-4jxh2\") on node \"addons-824997\" DevicePath \"\""
	Dec 13 13:12:33 addons-824997 kubelet[1416]: I1213 13:12:33.828470    1416 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/64800cf4-f039-4e49-ab7d-34d3921f718f-data\") on node \"addons-824997\" DevicePath \"\""
	Dec 13 13:12:34 addons-824997 kubelet[1416]: I1213 13:12:34.706112    1416 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64800cf4-f039-4e49-ab7d-34d3921f718f" path="/var/lib/kubelet/pods/64800cf4-f039-4e49-ab7d-34d3921f718f/volumes"
	Dec 13 13:12:45 addons-824997 kubelet[1416]: E1213 13:12:45.703819    1416 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:1.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-7zrqg" podUID="129cafd7-8868-426c-9db8-8a2635893a27"
	Dec 13 13:13:00 addons-824997 kubelet[1416]: E1213 13:13:00.705885    1416 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:1.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-7zrqg" podUID="129cafd7-8868-426c-9db8-8a2635893a27"
	Dec 13 13:13:03 addons-824997 kubelet[1416]: I1213 13:13:03.604892    1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/ccc62485-2f8d-4012-872d-4ef25b8bb4de-script\") pod \"helper-pod-create-pvc-69b3bcb2-23fb-4428-9dd8-2694196e4f24\" (UID: \"ccc62485-2f8d-4012-872d-4ef25b8bb4de\") " pod="local-path-storage/helper-pod-create-pvc-69b3bcb2-23fb-4428-9dd8-2694196e4f24"
	Dec 13 13:13:03 addons-824997 kubelet[1416]: I1213 13:13:03.604953    1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/ccc62485-2f8d-4012-872d-4ef25b8bb4de-data\") pod \"helper-pod-create-pvc-69b3bcb2-23fb-4428-9dd8-2694196e4f24\" (UID: \"ccc62485-2f8d-4012-872d-4ef25b8bb4de\") " pod="local-path-storage/helper-pod-create-pvc-69b3bcb2-23fb-4428-9dd8-2694196e4f24"
	Dec 13 13:13:03 addons-824997 kubelet[1416]: I1213 13:13:03.605020    1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz2dq\" (UniqueName: \"kubernetes.io/projected/ccc62485-2f8d-4012-872d-4ef25b8bb4de-kube-api-access-cz2dq\") pod \"helper-pod-create-pvc-69b3bcb2-23fb-4428-9dd8-2694196e4f24\" (UID: \"ccc62485-2f8d-4012-872d-4ef25b8bb4de\") " pod="local-path-storage/helper-pod-create-pvc-69b3bcb2-23fb-4428-9dd8-2694196e4f24"
	Dec 13 13:13:06 addons-824997 kubelet[1416]: E1213 13:13:06.576215    1416 log.go:32] "PullImage from image service failed" err=<
	Dec 13 13:13:06 addons-824997 kubelet[1416]:         rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee: 429 Too Many Requests
	Dec 13 13:13:06 addons-824997 kubelet[1416]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 13 13:13:06 addons-824997 kubelet[1416]:  > image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 13 13:13:06 addons-824997 kubelet[1416]: E1213 13:13:06.576275    1416 kuberuntime_image.go:43] "Failed to pull image" err=<
	Dec 13 13:13:06 addons-824997 kubelet[1416]:         failed to pull and unpack image "docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee: 429 Too Many Requests
	Dec 13 13:13:06 addons-824997 kubelet[1416]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 13 13:13:06 addons-824997 kubelet[1416]:  > image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 13 13:13:06 addons-824997 kubelet[1416]: E1213 13:13:06.576396    1416 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Dec 13 13:13:06 addons-824997 kubelet[1416]:         container helper-pod start failed in pod helper-pod-create-pvc-69b3bcb2-23fb-4428-9dd8-2694196e4f24_local-path-storage(ccc62485-2f8d-4012-872d-4ef25b8bb4de): ErrImagePull: failed to pull and unpack image "docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee: 429 Too Many Requests
	Dec 13 13:13:06 addons-824997 kubelet[1416]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 13 13:13:06 addons-824997 kubelet[1416]:  > logger="UnhandledError"
	Dec 13 13:13:06 addons-824997 kubelet[1416]: E1213 13:13:06.576432    1416 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-69b3bcb2-23fb-4428-9dd8-2694196e4f24" podUID="ccc62485-2f8d-4012-872d-4ef25b8bb4de"
	Dec 13 13:13:07 addons-824997 kubelet[1416]: E1213 13:13:07.238459    1416 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-69b3bcb2-23fb-4428-9dd8-2694196e4f24" podUID="ccc62485-2f8d-4012-872d-4ef25b8bb4de"
	Dec 13 13:13:12 addons-824997 kubelet[1416]: I1213 13:13:12.703237    1416 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 13:13:13 addons-824997 kubelet[1416]: E1213 13:13:13.704131    1416 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:1.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-7zrqg" podUID="129cafd7-8868-426c-9db8-8a2635893a27"
	
	
	==> storage-provisioner [ef3e180105d3499836be1143d99c98d35c936107cc4d223477d25e36dde57d8a] <==
	W1213 13:12:54.813697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:12:56.816308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:12:56.821006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:12:58.824476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:12:58.828259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:00.832014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:00.835927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:02.839049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:02.843079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:04.846740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:04.850690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:06.853718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:06.857851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:08.860636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:08.864624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:10.868448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:10.872378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:12.875995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:12.881202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:14.884309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:14.890293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:16.893753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:16.898675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:18.902452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:13:18.906976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-824997 -n addons-824997
helpers_test.go:270: (dbg) Run:  kubectl --context addons-824997 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-7zrqg test-local-path helper-pod-create-pvc-69b3bcb2-23fb-4428-9dd8-2694196e4f24
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-824997 describe pod hello-world-app-5d498dc89-7zrqg test-local-path helper-pod-create-pvc-69b3bcb2-23fb-4428-9dd8-2694196e4f24
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-824997 describe pod hello-world-app-5d498dc89-7zrqg test-local-path helper-pod-create-pvc-69b3bcb2-23fb-4428-9dd8-2694196e4f24: exit status 1 (72.120717ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-7zrqg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-824997/192.168.49.2
	Start Time:       Sat, 13 Dec 2025 13:08:46 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.34
	IPs:
	  IP:           10.244.0.34
	Controlled By:  ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-92vbs (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-92vbs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age    From               Message
	  ----     ------     ----   ----               -------
	  Normal   Scheduled  4m33s  default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-7zrqg to addons-824997
	  Warning  Failed     4m13s  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": failed to pull and unpack image "docker.io/kicbase/echo-server:1.0": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  100s (x5 over 4m32s)  kubelet  Pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed   98s (x4 over 4m30s)   kubelet  Failed to pull image "docker.io/kicbase/echo-server:1.0": failed to pull and unpack image "docker.io/kicbase/echo-server:1.0": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   98s (x5 over 4m30s)   kubelet  Error: ErrImagePull
	  Warning  Failed   34s (x15 over 4m29s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  6s (x17 over 4m29s)   kubelet  Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-26czf (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-26czf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "helper-pod-create-pvc-69b3bcb2-23fb-4428-9dd8-2694196e4f24" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-824997 describe pod hello-world-app-5d498dc89-7zrqg test-local-path helper-pod-create-pvc-69b3bcb2-23fb-4428-9dd8-2694196e4f24: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/LocalPath (302.49s)
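Editor's note: the LocalPath failure above traces to Docker Hub's unauthenticated pull rate limit (the repeated "429 Too Many Requests" / toomanyrequests errors in the kubelet log and pod events), not to the local-path provisioner itself; the helper pod's busybox:stable image and hello-world-app's kicbase/echo-server:1.0 image are never pulled, so the PVC stays Pending for the full 5m0s wait. A minimal mitigation sketch, assuming a host-side Docker Hub login and the same minikube profile as the test run (these are standard minikube/kubectl/docker commands, not part of the recorded test output):

	# authenticate the host so pulls count against an authenticated quota
	docker login

	# pre-load the images the test needs into the cluster, avoiding in-cluster pulls
	minikube -p addons-824997 image load busybox:stable
	minikube -p addons-824997 image load docker.io/kicbase/echo-server:1.0

	# watch the PVC bind once the helper pod can start
	kubectl --context addons-824997 get pvc test-pvc -n default -w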

                                                
                                    

TestDockerEnvContainerd (41.3s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-252506 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-252506 --driver=docker  --container-runtime=containerd: (22.933743645s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-252506"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXVCuFdv/agent.433896" SSH_AGENT_PID="433897" DOCKER_HOST=ssh://docker@127.0.0.1:33157 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXVCuFdv/agent.433896" SSH_AGENT_PID="433897" DOCKER_HOST=ssh://docker@127.0.0.1:33157 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Non-zero exit: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXVCuFdv/agent.433896" SSH_AGENT_PID="433897" DOCKER_HOST=ssh://docker@127.0.0.1:33157 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": exit status 1 (2.258955213s)

                                                
                                                
-- stdout --
	Sending build context to Docker daemon  2.048kB

                                                
                                                
-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

                                                
                                                
** /stderr **
docker_test.go:245: failed to build images, error: exit status 1, output:
-- stdout --
	Sending build context to Docker daemon  2.048kB

                                                
                                                
-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

                                                
                                                
** /stderr **
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXVCuFdv/agent.433896" SSH_AGENT_PID="433897" DOCKER_HOST=ssh://docker@127.0.0.1:33157 docker image ls"
docker_test.go:255: failed to detect image 'local/minikube-dockerenv-containerd-test' in output of docker image ls
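Editor's note: the build fails with the legacy builder explicitly selected (DOCKER_BUILDKIT=0) over the docker-env SSH connection, and the daemon reports only "Error response from daemon: exit status 1". A hedged sketch for reproducing the step interactively, assuming the dockerenv-252506 profile is still running and the same testdata/docker-env build context; the BuildKit variant is included only as a comparison point, it is not what the test itself does:

	# point the shell at the minikube node's Docker daemon over SSH, as the test does
	eval "$(out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-252506)"

	# retry the legacy-builder build by hand to capture the daemon-side error
	DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env

	# compare against a BuildKit build to see whether only the legacy path fails
	DOCKER_BUILDKIT=1 docker build -t local/minikube-dockerenv-containerd-test:buildkit testdata/docker-env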
panic.go:615: *** TestDockerEnvContainerd FAILED at 2025-12-13 13:14:15.019373495 +0000 UTC m=+575.968974798
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestDockerEnvContainerd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect dockerenv-252506
helpers_test.go:244: (dbg) docker inspect dockerenv-252506:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e51a8fccb28462a4f7e6660b32dca69bd853e22db7446d6d97b0e01050e1923",
	        "Created": "2025-12-13T13:13:42.603875291Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 431353,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:13:42.641595467Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/0e51a8fccb28462a4f7e6660b32dca69bd853e22db7446d6d97b0e01050e1923/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e51a8fccb28462a4f7e6660b32dca69bd853e22db7446d6d97b0e01050e1923/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e51a8fccb28462a4f7e6660b32dca69bd853e22db7446d6d97b0e01050e1923/hosts",
	        "LogPath": "/var/lib/docker/containers/0e51a8fccb28462a4f7e6660b32dca69bd853e22db7446d6d97b0e01050e1923/0e51a8fccb28462a4f7e6660b32dca69bd853e22db7446d6d97b0e01050e1923-json.log",
	        "Name": "/dockerenv-252506",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "dockerenv-252506:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "dockerenv-252506",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0e51a8fccb28462a4f7e6660b32dca69bd853e22db7446d6d97b0e01050e1923",
	                "LowerDir": "/var/lib/docker/overlay2/d457703ba9bfa9b0f985da82563045bb71a355678da9ff0d4111d12d301bff84-init/diff:/var/lib/docker/overlay2/be5aa5e3490e76c6aea57ece480ce7168b4c08e9f5040b5571a6aeb87c809618/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d457703ba9bfa9b0f985da82563045bb71a355678da9ff0d4111d12d301bff84/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d457703ba9bfa9b0f985da82563045bb71a355678da9ff0d4111d12d301bff84/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d457703ba9bfa9b0f985da82563045bb71a355678da9ff0d4111d12d301bff84/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "dockerenv-252506",
	                "Source": "/var/lib/docker/volumes/dockerenv-252506/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "dockerenv-252506",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "dockerenv-252506",
	                "name.minikube.sigs.k8s.io": "dockerenv-252506",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6f18d4d3d1ec7d7a3b2e15d968790587a0b5b28f6ba55a2c4ed4b4e10fd75fd7",
	            "SandboxKey": "/var/run/docker/netns/6f18d4d3d1ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33157"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ]
	            },
	            "Networks": {
	                "dockerenv-252506": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4f3dd4b90db20ead3d25ae616054170098c77c8c6f03576b2c6bf5f69e39a913",
	                    "EndpointID": "2450d4026942000b2831d993760d75c1d6c95568b65e93d33c2185e404c15a15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "66:2c:e2:3e:20:e6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "dockerenv-252506",
	                        "0e51a8fccb28"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p dockerenv-252506 -n dockerenv-252506
helpers_test.go:253: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p dockerenv-252506 logs -n 25
helpers_test.go:261: TestDockerEnvContainerd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬─────────────────────────────────────────────────────────────────────────────────┬──────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                      ARGS                                       │     PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────┼─────────────────────────────────────────────────────────────────────────────────┼──────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons     │ addons-824997 addons disable nvidia-device-plugin --alsologtostderr -v=1        │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons     │ addons-824997 addons disable cloud-spanner --alsologtostderr -v=1               │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons     │ addons-824997 addons disable headlamp --alsologtostderr -v=1                    │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ ip         │ addons-824997 ip                                                                │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons     │ addons-824997 addons disable registry --alsologtostderr -v=1                    │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons     │ addons-824997 addons disable metrics-server --alsologtostderr -v=1              │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons     │ addons-824997 addons disable inspektor-gadget --alsologtostderr -v=1            │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ ssh        │ addons-824997 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'        │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ ip         │ addons-824997 ip                                                                │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons     │ addons-824997 addons disable ingress-dns --alsologtostderr -v=1                 │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons     │ addons-824997 addons disable ingress --alsologtostderr -v=1                     │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons     │ addons-824997 addons disable amd-gpu-device-plugin --alsologtostderr -v=1       │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons     │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-824997  │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons     │ addons-824997 addons disable registry-creds --alsologtostderr -v=1              │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons     │ addons-824997 addons disable volumesnapshots --alsologtostderr -v=1             │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons     │ addons-824997 addons disable csi-hostpath-driver --alsologtostderr -v=1         │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:09 UTC │
	│ addons     │ addons-824997 addons disable yakd --alsologtostderr -v=1                        │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:09 UTC │
	│ addons     │ addons-824997 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ stop       │ -p addons-824997                                                                │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ addons     │ enable dashboard -p addons-824997                                               │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ addons     │ disable dashboard -p addons-824997                                              │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ addons     │ disable gvisor -p addons-824997                                                 │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ delete     │ -p addons-824997                                                                │ addons-824997    │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ start      │ -p dockerenv-252506 --driver=docker  --container-runtime=containerd             │ dockerenv-252506 │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:14 UTC │
	│ docker-env │ --ssh-host --ssh-add -p dockerenv-252506                                        │ dockerenv-252506 │ jenkins │ v1.37.0 │ 13 Dec 25 13:14 UTC │ 13 Dec 25 13:14 UTC │
	└────────────┴─────────────────────────────────────────────────────────────────────────────────┴──────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:13:37
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:13:37.917019  430785 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:13:37.917118  430785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:13:37.917122  430785 out.go:374] Setting ErrFile to fd 2...
	I1213 13:13:37.917125  430785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:13:37.917292  430785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:13:37.917747  430785 out.go:368] Setting JSON to false
	I1213 13:13:37.918638  430785 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6961,"bootTime":1765624657,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:13:37.918691  430785 start.go:143] virtualization: kvm guest
	I1213 13:13:37.921008  430785 out.go:179] * [dockerenv-252506] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:13:37.922302  430785 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:13:37.922304  430785 notify.go:221] Checking for updates...
	I1213 13:13:37.925104  430785 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:13:37.926462  430785 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	I1213 13:13:37.927717  430785 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	I1213 13:13:37.929077  430785 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:13:37.930419  430785 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:13:37.932014  430785 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:13:37.956569  430785 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:13:37.956673  430785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:13:38.014267  430785 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-13 13:13:38.002588377 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:13:38.014393  430785 docker.go:319] overlay module found
	I1213 13:13:38.016410  430785 out.go:179] * Using the docker driver based on user configuration
	I1213 13:13:38.017936  430785 start.go:309] selected driver: docker
	I1213 13:13:38.017950  430785 start.go:927] validating driver "docker" against <nil>
	I1213 13:13:38.017962  430785 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:13:38.018084  430785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:13:38.077126  430785 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-13 13:13:38.066371849 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:13:38.077287  430785 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 13:13:38.077863  430785 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1213 13:13:38.078018  430785 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 13:13:38.080304  430785 out.go:179] * Using Docker driver with root privileges
	I1213 13:13:38.081562  430785 cni.go:84] Creating CNI manager for ""
	I1213 13:13:38.081625  430785 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 13:13:38.081633  430785 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 13:13:38.081701  430785 start.go:353] cluster config:
	{Name:dockerenv-252506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:dockerenv-252506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:13:38.083264  430785 out.go:179] * Starting "dockerenv-252506" primary control-plane node in "dockerenv-252506" cluster
	I1213 13:13:38.084506  430785 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 13:13:38.085990  430785 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:13:38.087214  430785 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 13:13:38.087242  430785 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	I1213 13:13:38.087249  430785 cache.go:65] Caching tarball of preloaded images
	I1213 13:13:38.087305  430785 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:13:38.087369  430785 preload.go:238] Found /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 13:13:38.087381  430785 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1213 13:13:38.087727  430785 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/config.json ...
	I1213 13:13:38.087745  430785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/config.json: {Name:mka400a9b5fa245785999da5be4de5666cca9230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:13:38.108168  430785 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 13:13:38.108186  430785 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 13:13:38.108202  430785 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:13:38.108231  430785 start.go:360] acquireMachinesLock for dockerenv-252506: {Name:mkab4ae57de95e205861a51b4abbd428230835f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:13:38.108334  430785 start.go:364] duration metric: took 87.946µs to acquireMachinesLock for "dockerenv-252506"
	I1213 13:13:38.108359  430785 start.go:93] Provisioning new machine with config: &{Name:dockerenv-252506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:dockerenv-252506 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 13:13:38.108425  430785 start.go:125] createHost starting for "" (driver="docker")
	I1213 13:13:38.110433  430785 out.go:252] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I1213 13:13:38.110635  430785 start.go:159] libmachine.API.Create for "dockerenv-252506" (driver="docker")
	I1213 13:13:38.110660  430785 client.go:173] LocalClient.Create starting
	I1213 13:13:38.110714  430785 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca.pem
	I1213 13:13:38.110742  430785 main.go:143] libmachine: Decoding PEM data...
	I1213 13:13:38.110757  430785 main.go:143] libmachine: Parsing certificate...
	I1213 13:13:38.110809  430785 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-401936/.minikube/certs/cert.pem
	I1213 13:13:38.110824  430785 main.go:143] libmachine: Decoding PEM data...
	I1213 13:13:38.110832  430785 main.go:143] libmachine: Parsing certificate...
	I1213 13:13:38.111159  430785 cli_runner.go:164] Run: docker network inspect dockerenv-252506 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 13:13:38.128193  430785 cli_runner.go:211] docker network inspect dockerenv-252506 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 13:13:38.128249  430785 network_create.go:284] running [docker network inspect dockerenv-252506] to gather additional debugging logs...
	I1213 13:13:38.128263  430785 cli_runner.go:164] Run: docker network inspect dockerenv-252506
	W1213 13:13:38.145384  430785 cli_runner.go:211] docker network inspect dockerenv-252506 returned with exit code 1
	I1213 13:13:38.145407  430785 network_create.go:287] error running [docker network inspect dockerenv-252506]: docker network inspect dockerenv-252506: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network dockerenv-252506 not found
	I1213 13:13:38.145428  430785 network_create.go:289] output of [docker network inspect dockerenv-252506]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network dockerenv-252506 not found
	
	** /stderr **
	I1213 13:13:38.145550  430785 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:13:38.163432  430785 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ae9970}
	I1213 13:13:38.163467  430785 network_create.go:124] attempt to create docker network dockerenv-252506 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 13:13:38.163513  430785 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-252506 dockerenv-252506
	I1213 13:13:38.212535  430785 network_create.go:108] docker network dockerenv-252506 192.168.49.0/24 created
	I1213 13:13:38.212557  430785 kic.go:121] calculated static IP "192.168.49.2" for the "dockerenv-252506" container
	I1213 13:13:38.212631  430785 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 13:13:38.229533  430785 cli_runner.go:164] Run: docker volume create dockerenv-252506 --label name.minikube.sigs.k8s.io=dockerenv-252506 --label created_by.minikube.sigs.k8s.io=true
	I1213 13:13:38.248246  430785 oci.go:103] Successfully created a docker volume dockerenv-252506
	I1213 13:13:38.248338  430785 cli_runner.go:164] Run: docker run --rm --name dockerenv-252506-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-252506 --entrypoint /usr/bin/test -v dockerenv-252506:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 13:13:38.645501  430785 oci.go:107] Successfully prepared a docker volume dockerenv-252506
	I1213 13:13:38.645566  430785 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 13:13:38.645574  430785 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 13:13:38.645654  430785 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-252506:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 13:13:42.531530  430785 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-252506:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.885827212s)
	I1213 13:13:42.531553  430785 kic.go:203] duration metric: took 3.885975189s to extract preloaded images to volume ...
	W1213 13:13:42.531648  430785 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1213 13:13:42.531678  430785 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1213 13:13:42.531717  430785 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 13:13:42.587072  430785 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-252506 --name dockerenv-252506 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-252506 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-252506 --network dockerenv-252506 --ip 192.168.49.2 --volume dockerenv-252506:/var --security-opt apparmor=unconfined --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 13:13:42.857016  430785 cli_runner.go:164] Run: docker container inspect dockerenv-252506 --format={{.State.Running}}
	I1213 13:13:42.876138  430785 cli_runner.go:164] Run: docker container inspect dockerenv-252506 --format={{.State.Status}}
	I1213 13:13:42.894426  430785 cli_runner.go:164] Run: docker exec dockerenv-252506 stat /var/lib/dpkg/alternatives/iptables
	I1213 13:13:42.942009  430785 oci.go:144] the created container "dockerenv-252506" has a running status.
	I1213 13:13:42.942031  430785 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-401936/.minikube/machines/dockerenv-252506/id_rsa...
	I1213 13:13:43.029009  430785 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-401936/.minikube/machines/dockerenv-252506/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 13:13:43.053729  430785 cli_runner.go:164] Run: docker container inspect dockerenv-252506 --format={{.State.Status}}
	I1213 13:13:43.074263  430785 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 13:13:43.074279  430785 kic_runner.go:114] Args: [docker exec --privileged dockerenv-252506 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 13:13:43.119924  430785 cli_runner.go:164] Run: docker container inspect dockerenv-252506 --format={{.State.Status}}
	I1213 13:13:43.143367  430785 machine.go:94] provisionDockerMachine start ...
	I1213 13:13:43.143586  430785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-252506
	I1213 13:13:43.166837  430785 main.go:143] libmachine: Using SSH client type: native
	I1213 13:13:43.167107  430785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33157 <nil> <nil>}
	I1213 13:13:43.167113  430785 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:13:43.167771  430785 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52734->127.0.0.1:33157: read: connection reset by peer
	I1213 13:13:46.301370  430785 main.go:143] libmachine: SSH cmd err, output: <nil>: dockerenv-252506
	
	I1213 13:13:46.301388  430785 ubuntu.go:182] provisioning hostname "dockerenv-252506"
	I1213 13:13:46.301465  430785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-252506
	I1213 13:13:46.319939  430785 main.go:143] libmachine: Using SSH client type: native
	I1213 13:13:46.320168  430785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33157 <nil> <nil>}
	I1213 13:13:46.320175  430785 main.go:143] libmachine: About to run SSH command:
	sudo hostname dockerenv-252506 && echo "dockerenv-252506" | sudo tee /etc/hostname
	I1213 13:13:46.463217  430785 main.go:143] libmachine: SSH cmd err, output: <nil>: dockerenv-252506
	
	I1213 13:13:46.463282  430785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-252506
	I1213 13:13:46.481220  430785 main.go:143] libmachine: Using SSH client type: native
	I1213 13:13:46.481452  430785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33157 <nil> <nil>}
	I1213 13:13:46.481464  430785 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdockerenv-252506' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-252506/g' /etc/hosts;
				else 
					echo '127.0.1.1 dockerenv-252506' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:13:46.614058  430785 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:13:46.614079  430785 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-401936/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-401936/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-401936/.minikube}
	I1213 13:13:46.614107  430785 ubuntu.go:190] setting up certificates
	I1213 13:13:46.614119  430785 provision.go:84] configureAuth start
	I1213 13:13:46.614179  430785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-252506
	I1213 13:13:46.632017  430785 provision.go:143] copyHostCerts
	I1213 13:13:46.632068  430785 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-401936/.minikube/cert.pem, removing ...
	I1213 13:13:46.632076  430785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-401936/.minikube/cert.pem
	I1213 13:13:46.632148  430785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-401936/.minikube/cert.pem (1123 bytes)
	I1213 13:13:46.632248  430785 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-401936/.minikube/key.pem, removing ...
	I1213 13:13:46.632251  430785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-401936/.minikube/key.pem
	I1213 13:13:46.632275  430785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-401936/.minikube/key.pem (1675 bytes)
	I1213 13:13:46.632373  430785 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-401936/.minikube/ca.pem, removing ...
	I1213 13:13:46.632378  430785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-401936/.minikube/ca.pem
	I1213 13:13:46.632406  430785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-401936/.minikube/ca.pem (1078 bytes)
	I1213 13:13:46.632476  430785 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-401936/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca-key.pem org=jenkins.dockerenv-252506 san=[127.0.0.1 192.168.49.2 dockerenv-252506 localhost minikube]
	I1213 13:13:46.729276  430785 provision.go:177] copyRemoteCerts
	I1213 13:13:46.729338  430785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:13:46.729382  430785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-252506
	I1213 13:13:46.748338  430785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/dockerenv-252506/id_rsa Username:docker}
	I1213 13:13:46.844622  430785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 13:13:46.863912  430785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:13:46.880732  430785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1213 13:13:46.897897  430785 provision.go:87] duration metric: took 283.755618ms to configureAuth
	I1213 13:13:46.897917  430785 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:13:46.898078  430785 config.go:182] Loaded profile config "dockerenv-252506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:13:46.898084  430785 machine.go:97] duration metric: took 3.75469118s to provisionDockerMachine
	I1213 13:13:46.898089  430785 client.go:176] duration metric: took 8.787425676s to LocalClient.Create
	I1213 13:13:46.898108  430785 start.go:167] duration metric: took 8.787476017s to libmachine.API.Create "dockerenv-252506"
	I1213 13:13:46.898114  430785 start.go:293] postStartSetup for "dockerenv-252506" (driver="docker")
	I1213 13:13:46.898125  430785 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:13:46.898169  430785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:13:46.898203  430785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-252506
	I1213 13:13:46.916531  430785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/dockerenv-252506/id_rsa Username:docker}
	I1213 13:13:47.014736  430785 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:13:47.018380  430785 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:13:47.018396  430785 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:13:47.018406  430785 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-401936/.minikube/addons for local assets ...
	I1213 13:13:47.018470  430785 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-401936/.minikube/files for local assets ...
	I1213 13:13:47.018490  430785 start.go:296] duration metric: took 120.36763ms for postStartSetup
	I1213 13:13:47.018876  430785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-252506
	I1213 13:13:47.036911  430785 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/config.json ...
	I1213 13:13:47.037172  430785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:13:47.037212  430785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-252506
	I1213 13:13:47.055285  430785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/dockerenv-252506/id_rsa Username:docker}
	I1213 13:13:47.148766  430785 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:13:47.153693  430785 start.go:128] duration metric: took 9.045252948s to createHost
	I1213 13:13:47.153712  430785 start.go:83] releasing machines lock for "dockerenv-252506", held for 9.045370451s
	I1213 13:13:47.153818  430785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-252506
	I1213 13:13:47.172292  430785 ssh_runner.go:195] Run: cat /version.json
	I1213 13:13:47.172358  430785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-252506
	I1213 13:13:47.172400  430785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:13:47.172474  430785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-252506
	I1213 13:13:47.191114  430785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/dockerenv-252506/id_rsa Username:docker}
	I1213 13:13:47.191555  430785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/dockerenv-252506/id_rsa Username:docker}
	I1213 13:13:47.343337  430785 ssh_runner.go:195] Run: systemctl --version
	I1213 13:13:47.350007  430785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:13:47.354817  430785 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:13:47.354923  430785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:13:47.381130  430785 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 13:13:47.381145  430785 start.go:496] detecting cgroup driver to use...
	I1213 13:13:47.381179  430785 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:13:47.381230  430785 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 13:13:47.395381  430785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 13:13:47.408017  430785 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:13:47.408079  430785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:13:47.424428  430785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:13:47.441641  430785 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:13:47.521159  430785 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:13:47.608611  430785 docker.go:234] disabling docker service ...
	I1213 13:13:47.608666  430785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:13:47.627708  430785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:13:47.640607  430785 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:13:47.721723  430785 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:13:47.801825  430785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:13:47.814988  430785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:13:47.829185  430785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 13:13:47.839736  430785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 13:13:47.848840  430785 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1213 13:13:47.848892  430785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1213 13:13:47.857975  430785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 13:13:47.866913  430785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 13:13:47.875557  430785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 13:13:47.884083  430785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:13:47.891861  430785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 13:13:47.900358  430785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 13:13:47.908671  430785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 13:13:47.917826  430785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:13:47.926153  430785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:13:47.933855  430785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:13:48.013985  430785 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 13:13:48.117221  430785 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 13:13:48.117282  430785 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 13:13:48.121334  430785 start.go:564] Will wait 60s for crictl version
	I1213 13:13:48.121380  430785 ssh_runner.go:195] Run: which crictl
	I1213 13:13:48.124926  430785 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:13:48.149506  430785 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 13:13:48.149565  430785 ssh_runner.go:195] Run: containerd --version
	I1213 13:13:48.171812  430785 ssh_runner.go:195] Run: containerd --version
	I1213 13:13:48.195444  430785 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1213 13:13:48.196858  430785 cli_runner.go:164] Run: docker network inspect dockerenv-252506 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:13:48.213837  430785 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 13:13:48.218280  430785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:13:48.228492  430785 kubeadm.go:884] updating cluster {Name:dockerenv-252506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:dockerenv-252506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:13:48.228589  430785 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 13:13:48.228634  430785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:13:48.254360  430785 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 13:13:48.254373  430785 containerd.go:534] Images already preloaded, skipping extraction
	I1213 13:13:48.254422  430785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:13:48.279051  430785 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 13:13:48.279064  430785 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:13:48.279095  430785 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 containerd true true} ...
	I1213 13:13:48.279194  430785 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=dockerenv-252506 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:dockerenv-252506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:13:48.279244  430785 ssh_runner.go:195] Run: sudo crictl info
	I1213 13:13:48.304994  430785 cni.go:84] Creating CNI manager for ""
	I1213 13:13:48.305002  430785 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 13:13:48.305015  430785 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 13:13:48.305036  430785 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-252506 NodeName:dockerenv-252506 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:13:48.305165  430785 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "dockerenv-252506"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 13:13:48.305225  430785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 13:13:48.313565  430785 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:13:48.313628  430785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:13:48.321695  430785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1213 13:13:48.335082  430785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 13:13:48.350481  430785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1213 13:13:48.363581  430785 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:13:48.367184  430785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:13:48.377094  430785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:13:48.456883  430785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:13:48.481645  430785 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506 for IP: 192.168.49.2
	I1213 13:13:48.481661  430785 certs.go:195] generating shared ca certs ...
	I1213 13:13:48.481679  430785 certs.go:227] acquiring lock for ca certs: {Name:mk638ad0c55891f03a1600a7ef1d632862f1d7c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:13:48.481848  430785 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-401936/.minikube/ca.key
	I1213 13:13:48.481906  430785 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-401936/.minikube/proxy-client-ca.key
	I1213 13:13:48.481913  430785 certs.go:257] generating profile certs ...
	I1213 13:13:48.481980  430785 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/client.key
	I1213 13:13:48.481998  430785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/client.crt with IP's: []
	I1213 13:13:48.608979  430785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/client.crt ...
	I1213 13:13:48.608997  430785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/client.crt: {Name:mk834cc3223b7db72471ef1639d6c5346f3966a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:13:48.609194  430785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/client.key ...
	I1213 13:13:48.609202  430785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/client.key: {Name:mk186d0284ac9c92525aa6f38574e417043045f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:13:48.609286  430785 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/apiserver.key.aac13501
	I1213 13:13:48.609296  430785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/apiserver.crt.aac13501 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 13:13:48.772737  430785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/apiserver.crt.aac13501 ...
	I1213 13:13:48.772755  430785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/apiserver.crt.aac13501: {Name:mk40777f6e34c2584a061e064ff9d89f91dacbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:13:48.772938  430785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/apiserver.key.aac13501 ...
	I1213 13:13:48.772950  430785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/apiserver.key.aac13501: {Name:mk5925eb758ba5ce1861ee5a0d8191ab34226efb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:13:48.773023  430785 certs.go:382] copying /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/apiserver.crt.aac13501 -> /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/apiserver.crt
	I1213 13:13:48.773094  430785 certs.go:386] copying /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/apiserver.key.aac13501 -> /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/apiserver.key
	I1213 13:13:48.773142  430785 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/proxy-client.key
	I1213 13:13:48.773162  430785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/proxy-client.crt with IP's: []
	I1213 13:13:48.801813  430785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/proxy-client.crt ...
	I1213 13:13:48.801829  430785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/proxy-client.crt: {Name:mk725c45b1b13313d68a0b06d541ab0421b19b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:13:48.801992  430785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/proxy-client.key ...
	I1213 13:13:48.802007  430785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/proxy-client.key: {Name:mk7941a870474bd44b41c7c24cecf51b74d820fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:13:48.802190  430785 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:13:48.802222  430785 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:13:48.802243  430785 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:13:48.802263  430785 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/key.pem (1675 bytes)
	I1213 13:13:48.802806  430785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:13:48.821039  430785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 13:13:48.837849  430785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:13:48.854664  430785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 13:13:48.871362  430785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 13:13:48.888392  430785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:13:48.905240  430785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:13:48.922339  430785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/dockerenv-252506/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:13:48.939698  430785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:13:48.959664  430785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:13:48.972247  430785 ssh_runner.go:195] Run: openssl version
	I1213 13:13:48.978485  430785 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:13:48.985917  430785 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:13:48.995600  430785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:13:48.999295  430785 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:13:48.999351  430785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:13:49.033248  430785 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:13:49.040960  430785 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 13:13:49.048120  430785 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:13:49.051848  430785 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 13:13:49.051909  430785 kubeadm.go:401] StartCluster: {Name:dockerenv-252506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:dockerenv-252506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:13:49.051982  430785 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 13:13:49.052042  430785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:13:49.079544  430785 cri.go:89] found id: ""
	I1213 13:13:49.079618  430785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:13:49.087994  430785 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 13:13:49.096051  430785 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 13:13:49.096108  430785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 13:13:49.103653  430785 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 13:13:49.103666  430785 kubeadm.go:158] found existing configuration files:
	
	I1213 13:13:49.103720  430785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 13:13:49.111261  430785 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 13:13:49.111303  430785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 13:13:49.118576  430785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 13:13:49.126204  430785 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 13:13:49.126260  430785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 13:13:49.133994  430785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 13:13:49.141734  430785 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 13:13:49.141778  430785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 13:13:49.149166  430785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 13:13:49.156706  430785 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 13:13:49.156758  430785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 13:13:49.164054  430785 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 13:13:49.201192  430785 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 13:13:49.201265  430785 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 13:13:49.222122  430785 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 13:13:49.222217  430785 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1213 13:13:49.222265  430785 kubeadm.go:319] OS: Linux
	I1213 13:13:49.222340  430785 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 13:13:49.222402  430785 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 13:13:49.222464  430785 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 13:13:49.222500  430785 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 13:13:49.222559  430785 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 13:13:49.222640  430785 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 13:13:49.222706  430785 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 13:13:49.222766  430785 kubeadm.go:319] CGROUPS_IO: enabled
	I1213 13:13:49.281264  430785 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 13:13:49.281415  430785 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 13:13:49.281532  430785 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 13:13:49.286499  430785 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 13:13:49.288822  430785 out.go:252]   - Generating certificates and keys ...
	I1213 13:13:49.288899  430785 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 13:13:49.288989  430785 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 13:13:49.802952  430785 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 13:13:50.071419  430785 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 13:13:50.398543  430785 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 13:13:50.486228  430785 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 13:13:50.569962  430785 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 13:13:50.570078  430785 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [dockerenv-252506 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 13:13:51.046532  430785 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 13:13:51.046640  430785 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-252506 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 13:13:51.270002  430785 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 13:13:51.316277  430785 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 13:13:51.400043  430785 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 13:13:51.400140  430785 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 13:13:51.631607  430785 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 13:13:51.796468  430785 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 13:13:52.155195  430785 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 13:13:52.428124  430785 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 13:13:53.113404  430785 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 13:13:53.113859  430785 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 13:13:53.117834  430785 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 13:13:53.119510  430785 out.go:252]   - Booting up control plane ...
	I1213 13:13:53.119623  430785 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 13:13:53.119748  430785 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 13:13:53.121409  430785 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 13:13:53.136129  430785 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 13:13:53.136278  430785 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 13:13:53.142934  430785 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 13:13:53.143203  430785 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 13:13:53.143244  430785 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 13:13:53.242891  430785 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 13:13:53.242996  430785 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 13:13:54.743826  430785 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501064388s
	I1213 13:13:54.746526  430785 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 13:13:54.746609  430785 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1213 13:13:54.746723  430785 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 13:13:54.746864  430785 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 13:13:55.925456  430785 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.178763614s
	I1213 13:13:56.617763  430785 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.871240944s
	I1213 13:13:58.248355  430785 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501699583s
	I1213 13:13:58.265390  430785 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 13:13:58.275568  430785 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 13:13:58.284333  430785 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 13:13:58.284629  430785 kubeadm.go:319] [mark-control-plane] Marking the node dockerenv-252506 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 13:13:58.291802  430785 kubeadm.go:319] [bootstrap-token] Using token: slq54u.fm1y9n7wh84c4v1s
	I1213 13:13:58.293290  430785 out.go:252]   - Configuring RBAC rules ...
	I1213 13:13:58.293471  430785 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 13:13:58.296393  430785 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 13:13:58.301485  430785 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 13:13:58.304516  430785 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 13:13:58.306843  430785 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 13:13:58.309109  430785 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 13:13:58.653449  430785 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 13:13:59.070534  430785 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 13:13:59.654406  430785 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 13:13:59.655390  430785 kubeadm.go:319] 
	I1213 13:13:59.655478  430785 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 13:13:59.655483  430785 kubeadm.go:319] 
	I1213 13:13:59.655579  430785 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 13:13:59.655584  430785 kubeadm.go:319] 
	I1213 13:13:59.655615  430785 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 13:13:59.655697  430785 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 13:13:59.655766  430785 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 13:13:59.655770  430785 kubeadm.go:319] 
	I1213 13:13:59.655843  430785 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 13:13:59.655847  430785 kubeadm.go:319] 
	I1213 13:13:59.655906  430785 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 13:13:59.655910  430785 kubeadm.go:319] 
	I1213 13:13:59.655995  430785 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 13:13:59.656123  430785 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 13:13:59.656228  430785 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 13:13:59.656233  430785 kubeadm.go:319] 
	I1213 13:13:59.656372  430785 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 13:13:59.656445  430785 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 13:13:59.656448  430785 kubeadm.go:319] 
	I1213 13:13:59.656540  430785 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token slq54u.fm1y9n7wh84c4v1s \
	I1213 13:13:59.656679  430785 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:05d8a85c1b2761169b95534d93c81e4c18e60369e201d73b5567ad02426dd2e0 \
	I1213 13:13:59.656707  430785 kubeadm.go:319] 	--control-plane 
	I1213 13:13:59.656711  430785 kubeadm.go:319] 
	I1213 13:13:59.656809  430785 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 13:13:59.656816  430785 kubeadm.go:319] 
	I1213 13:13:59.656922  430785 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token slq54u.fm1y9n7wh84c4v1s \
	I1213 13:13:59.657067  430785 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:05d8a85c1b2761169b95534d93c81e4c18e60369e201d73b5567ad02426dd2e0 
	I1213 13:13:59.659501  430785 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1213 13:13:59.659619  430785 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 13:13:59.659647  430785 cni.go:84] Creating CNI manager for ""
	I1213 13:13:59.659656  430785 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 13:13:59.662115  430785 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 13:13:59.663250  430785 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 13:13:59.667837  430785 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 13:13:59.667853  430785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 13:13:59.680916  430785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 13:13:59.895429  430785 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 13:13:59.895526  430785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:13:59.895526  430785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes dockerenv-252506 minikube.k8s.io/updated_at=2025_12_13T13_13_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7 minikube.k8s.io/name=dockerenv-252506 minikube.k8s.io/primary=true
	I1213 13:13:59.906104  430785 ops.go:34] apiserver oom_adj: -16
	I1213 13:13:59.981019  430785 kubeadm.go:1114] duration metric: took 85.557663ms to wait for elevateKubeSystemPrivileges
	I1213 13:13:59.981048  430785 kubeadm.go:403] duration metric: took 10.929149347s to StartCluster
	I1213 13:13:59.981071  430785 settings.go:142] acquiring lock: {Name:mk71afd6e9758cc52371589a74f73214557044d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:13:59.981171  430785 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-401936/kubeconfig
	I1213 13:13:59.981941  430785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/kubeconfig: {Name:mk743b5761bd946614fa12c7aa179660c36f36c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:13:59.982184  430785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 13:13:59.982186  430785 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 13:13:59.982266  430785 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 13:13:59.982377  430785 addons.go:70] Setting storage-provisioner=true in profile "dockerenv-252506"
	I1213 13:13:59.982397  430785 addons.go:239] Setting addon storage-provisioner=true in "dockerenv-252506"
	I1213 13:13:59.982428  430785 host.go:66] Checking if "dockerenv-252506" exists ...
	I1213 13:13:59.982436  430785 config.go:182] Loaded profile config "dockerenv-252506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:13:59.982449  430785 addons.go:70] Setting default-storageclass=true in profile "dockerenv-252506"
	I1213 13:13:59.982478  430785 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-252506"
	I1213 13:13:59.982859  430785 cli_runner.go:164] Run: docker container inspect dockerenv-252506 --format={{.State.Status}}
	I1213 13:13:59.982961  430785 cli_runner.go:164] Run: docker container inspect dockerenv-252506 --format={{.State.Status}}
	I1213 13:13:59.983908  430785 out.go:179] * Verifying Kubernetes components...
	I1213 13:13:59.985556  430785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:14:00.007838  430785 addons.go:239] Setting addon default-storageclass=true in "dockerenv-252506"
	I1213 13:14:00.007876  430785 host.go:66] Checking if "dockerenv-252506" exists ...
	I1213 13:14:00.008337  430785 cli_runner.go:164] Run: docker container inspect dockerenv-252506 --format={{.State.Status}}
	I1213 13:14:00.008471  430785 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:14:00.009733  430785 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:14:00.009744  430785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 13:14:00.009797  430785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-252506
	I1213 13:14:00.037940  430785 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:14:00.037955  430785 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:14:00.038017  430785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-252506
	I1213 13:14:00.038661  430785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/dockerenv-252506/id_rsa Username:docker}
	I1213 13:14:00.059461  430785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/dockerenv-252506/id_rsa Username:docker}
	I1213 13:14:00.075749  430785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 13:14:00.136431  430785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:14:00.151258  430785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:14:00.166460  430785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:14:00.228416  430785 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1213 13:14:00.230199  430785 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:14:00.230244  430785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:14:00.410948  430785 api_server.go:72] duration metric: took 428.731773ms to wait for apiserver process to appear ...
	I1213 13:14:00.410967  430785 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:14:00.410986  430785 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1213 13:14:00.417188  430785 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1213 13:14:00.417983  430785 api_server.go:141] control plane version: v1.34.2
	I1213 13:14:00.417998  430785 api_server.go:131] duration metric: took 7.026271ms to wait for apiserver health ...
	I1213 13:14:00.418007  430785 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:14:00.420553  430785 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 13:14:00.420654  430785 system_pods.go:59] 5 kube-system pods found
	I1213 13:14:00.420672  430785 system_pods.go:61] "etcd-dockerenv-252506" [7dee7429-2426-432f-b52e-fe3fc8db850e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 13:14:00.420679  430785 system_pods.go:61] "kube-apiserver-dockerenv-252506" [7dc0659b-ade3-408a-a750-601b55982d0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 13:14:00.420685  430785 system_pods.go:61] "kube-controller-manager-dockerenv-252506" [a102750a-d5c2-47d5-8d48-3ded9dcadf9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 13:14:00.420689  430785 system_pods.go:61] "kube-scheduler-dockerenv-252506" [22dab315-8197-4450-92fe-9f75e8129674] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 13:14:00.420695  430785 system_pods.go:61] "storage-provisioner" [f40ec169-9283-4a1a-b35d-b5e78961833c] Pending
	I1213 13:14:00.420701  430785 system_pods.go:74] duration metric: took 2.687957ms to wait for pod list to return data ...
	I1213 13:14:00.420711  430785 kubeadm.go:587] duration metric: took 438.498806ms to wait for: map[apiserver:true system_pods:true]
	I1213 13:14:00.420722  430785 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:14:00.421702  430785 addons.go:530] duration metric: took 439.43793ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 13:14:00.422838  430785 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:14:00.422854  430785 node_conditions.go:123] node cpu capacity is 8
	I1213 13:14:00.422873  430785 node_conditions.go:105] duration metric: took 2.146089ms to run NodePressure ...
	I1213 13:14:00.422886  430785 start.go:242] waiting for startup goroutines ...
	I1213 13:14:00.732170  430785 kapi.go:214] "coredns" deployment in "kube-system" namespace and "dockerenv-252506" context rescaled to 1 replicas
	I1213 13:14:00.732201  430785 start.go:247] waiting for cluster config update ...
	I1213 13:14:00.732217  430785 start.go:256] writing updated cluster config ...
	I1213 13:14:00.732516  430785 ssh_runner.go:195] Run: rm -f paused
	I1213 13:14:00.779619  430785 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 13:14:00.781574  430785 out.go:179] * Done! kubectl is now configured to use "dockerenv-252506" cluster and "default" namespace by default
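	Note: the bring-up logged above can be approximated manually with a start command of roughly this shape. This is a sketch only; the profile name, driver, runtime, CPU/memory, and Kubernetes version are taken from the StartCluster line earlier in this log, and the exact flags passed by the test harness may differ.
	
	  minikube start -p dockerenv-252506 \
	    --driver=docker --container-runtime=containerd \
	    --memory=8000 --cpus=2 --kubernetes-version=v1.34.2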
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	c503c1293b08d       409467f978b4a       10 seconds ago      Running             kindnet-cni               0                   83ff14b0d7889       kindnet-4924b                              kube-system
	e0bc1d77ab018       8aa150647e88a       10 seconds ago      Running             kube-proxy                0                   843ffb3c428d4       kube-proxy-s5vgs                           kube-system
	1647811cfe86f       88320b5498ff2       20 seconds ago      Running             kube-scheduler            0                   027444e787b96       kube-scheduler-dockerenv-252506            kube-system
	f057bc0663506       01e8bacf0f500       20 seconds ago      Running             kube-controller-manager   0                   515deadd78d1c       kube-controller-manager-dockerenv-252506   kube-system
	caddbfa5d9774       a5f569d49a979       20 seconds ago      Running             kube-apiserver            0                   aacc1a4293941       kube-apiserver-dockerenv-252506            kube-system
	82e0c2a9046e5       a3e246e9556e9       20 seconds ago      Running             etcd                      0                   b76800127004e       etcd-dockerenv-252506                      kube-system
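	Note: the listing above is the node's CRI view of its containers. It can be re-captured against this profile with something like the following (a sketch, assuming the profile is still running and crictl is available in the node image, as the earlier "crictl ps -a" invocation in this log suggests):
	
	  minikube ssh -p dockerenv-252506 "sudo crictl ps -a"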
	
	
	==> containerd <==
	Dec 13 13:14:05 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:05.040525671Z" level=info msg="StartContainer for \"e0bc1d77ab0181c91cf9d9d58d9b358b9fca3a0f97b8e6592ddfae0eed4fec0e\""
	Dec 13 13:14:05 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:05.042311747Z" level=info msg="connecting to shim e0bc1d77ab0181c91cf9d9d58d9b358b9fca3a0f97b8e6592ddfae0eed4fec0e" address="unix:///run/containerd/s/6e6dcc2c73475a6d2b0be5acdd41b5794691e9495dac7bc0aa70bcbc0502c5db" protocol=ttrpc version=3
	Dec 13 13:14:05 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:05.111276238Z" level=info msg="StartContainer for \"e0bc1d77ab0181c91cf9d9d58d9b358b9fca3a0f97b8e6592ddfae0eed4fec0e\" returns successfully"
	Dec 13 13:14:05 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:05.296084554Z" level=info msg="RunPodSandbox for name:\"kindnet-4924b\"  uid:\"1511ecb8-68fd-4c14-af7d-a51c2ad4294c\"  namespace:\"kube-system\" returns sandbox id \"83ff14b0d78899b5968e1dfd8702566ab67b3900a78032727f23dada8e5b7add\""
	Dec 13 13:14:05 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:05.302354039Z" level=info msg="CreateContainer within sandbox \"83ff14b0d78899b5968e1dfd8702566ab67b3900a78032727f23dada8e5b7add\" for container name:\"kindnet-cni\""
	Dec 13 13:14:05 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:05.309002469Z" level=info msg="Container c503c1293b08d4dc6e1a2e529c67bf037c8b1fe6130196a34ab9cdb2666c508b: CDI devices from CRI Config.CDIDevices: []"
	Dec 13 13:14:05 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:05.316076871Z" level=info msg="CreateContainer within sandbox \"83ff14b0d78899b5968e1dfd8702566ab67b3900a78032727f23dada8e5b7add\" for name:\"kindnet-cni\" returns container id \"c503c1293b08d4dc6e1a2e529c67bf037c8b1fe6130196a34ab9cdb2666c508b\""
	Dec 13 13:14:05 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:05.316653840Z" level=info msg="StartContainer for \"c503c1293b08d4dc6e1a2e529c67bf037c8b1fe6130196a34ab9cdb2666c508b\""
	Dec 13 13:14:05 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:05.317465187Z" level=info msg="connecting to shim c503c1293b08d4dc6e1a2e529c67bf037c8b1fe6130196a34ab9cdb2666c508b" address="unix:///run/containerd/s/8ed897f68e6f76502c211252f240c13430fc9df4b5caea1ade1f84a0612c28dd" protocol=ttrpc version=3
	Dec 13 13:14:05 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:05.414137199Z" level=info msg="StartContainer for \"c503c1293b08d4dc6e1a2e529c67bf037c8b1fe6130196a34ab9cdb2666c508b\" returns successfully"
	Dec 13 13:14:08 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:08.948449513Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod641f2c2e45e8b8f7ac209f9d41079599.slice/cri-containerd-82e0c2a9046e51b0c87c4fd9091f57046e27ffacb7128ebc53052f54a248bf14.scope/hugetlb.2MB.events\""
	Dec 13 13:14:08 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:08.948536056Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod641f2c2e45e8b8f7ac209f9d41079599.slice/cri-containerd-82e0c2a9046e51b0c87c4fd9091f57046e27ffacb7128ebc53052f54a248bf14.scope/hugetlb.1GB.events\""
	Dec 13 13:14:08 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:08.949253024Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9580353d8421aa6e27df2a472443aacd.slice/cri-containerd-1647811cfe86fb3099e1a52d78dfd3a67e5fa7d7c58da58502e4497a19a16703.scope/hugetlb.2MB.events\""
	Dec 13 13:14:08 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:08.949350130Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9580353d8421aa6e27df2a472443aacd.slice/cri-containerd-1647811cfe86fb3099e1a52d78dfd3a67e5fa7d7c58da58502e4497a19a16703.scope/hugetlb.1GB.events\""
	Dec 13 13:14:08 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:08.950095288Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfac09e89_bec1_4aff_8d6e_d38798fb296d.slice/cri-containerd-e0bc1d77ab0181c91cf9d9d58d9b358b9fca3a0f97b8e6592ddfae0eed4fec0e.scope/hugetlb.2MB.events\""
	Dec 13 13:14:08 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:08.950239361Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfac09e89_bec1_4aff_8d6e_d38798fb296d.slice/cri-containerd-e0bc1d77ab0181c91cf9d9d58d9b358b9fca3a0f97b8e6592ddfae0eed4fec0e.scope/hugetlb.1GB.events\""
	Dec 13 13:14:08 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:08.951405120Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a02c164a12c05099b674d1914be98b6.slice/cri-containerd-caddbfa5d97747cbe51ba129626dacb777ef5f4b878f744556b485478c5f4dac.scope/hugetlb.2MB.events\""
	Dec 13 13:14:08 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:08.951488356Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a02c164a12c05099b674d1914be98b6.slice/cri-containerd-caddbfa5d97747cbe51ba129626dacb777ef5f4b878f744556b485478c5f4dac.scope/hugetlb.1GB.events\""
	Dec 13 13:14:08 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:08.952112082Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52db810b1488562eb12f73873fb71441.slice/cri-containerd-f057bc0663506d01acd991d32f37f840f69de38a3a6cb24dc87382f3f0613d4c.scope/hugetlb.2MB.events\""
	Dec 13 13:14:08 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:08.952187713Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52db810b1488562eb12f73873fb71441.slice/cri-containerd-f057bc0663506d01acd991d32f37f840f69de38a3a6cb24dc87382f3f0613d4c.scope/hugetlb.1GB.events\""
	Dec 13 13:14:08 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:08.953131142Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod1511ecb8_68fd_4c14_af7d_a51c2ad4294c.slice/cri-containerd-c503c1293b08d4dc6e1a2e529c67bf037c8b1fe6130196a34ab9cdb2666c508b.scope/hugetlb.2MB.events\""
	Dec 13 13:14:08 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:08.953311405Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod1511ecb8_68fd_4c14_af7d_a51c2ad4294c.slice/cri-containerd-c503c1293b08d4dc6e1a2e529c67bf037c8b1fe6130196a34ab9cdb2666c508b.scope/hugetlb.1GB.events\""
	Dec 13 13:14:15 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:15.795751963Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 13:14:15 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:15.795858340Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 13:14:15 dockerenv-252506 containerd[659]: time="2025-12-13T13:14:15.795927243Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	
	
	==> describe nodes <==
	Name:               dockerenv-252506
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=dockerenv-252506
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=dockerenv-252506
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_13_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:13:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  dockerenv-252506
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:14:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:14:15 +0000   Sat, 13 Dec 2025 13:13:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:14:15 +0000   Sat, 13 Dec 2025 13:13:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:14:15 +0000   Sat, 13 Dec 2025 13:13:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:14:15 +0000   Sat, 13 Dec 2025 13:14:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    dockerenv-252506
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                ae09bb71-6e73-4c87-9fe7-d4b2501e263b
	  Boot ID:                    90a4a0ca-634d-4c7c-8727-6b2f644cc467
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-nks5c                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12s
	  kube-system                 etcd-dockerenv-252506                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17s
	  kube-system                 kindnet-4924b                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12s
	  kube-system                 kube-apiserver-dockerenv-252506             250m (3%)     0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-controller-manager-dockerenv-252506    200m (2%)     0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-proxy-s5vgs                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 kube-scheduler-dockerenv-252506             100m (1%)     0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10s   kube-proxy       
	  Normal  Starting                 18s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17s   kubelet          Node dockerenv-252506 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s   kubelet          Node dockerenv-252506 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s   kubelet          Node dockerenv-252506 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13s   node-controller  Node dockerenv-252506 event: Registered Node dockerenv-252506 in Controller
	  Normal  NodeReady                1s    kubelet          Node dockerenv-252506 status is now: NodeReady
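	Note: the node description above is the API object for the control-plane node. Using the kubeconfig context the start log reports writing ("dockerenv-252506"), it can be regenerated with, for example:
	
	  kubectl --context dockerenv-252506 describe node dockerenv-252506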
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 3d 25 07 3f b0 08 06
	[ +15.550392] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 5b b2 4e f6 0c 08 06
	[  +0.000437] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 3d 25 07 3f b0 08 06
	[Dec13 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 56 d0 e6 62 ca 08 06
	[  +0.000156] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 2b b1 e9 34 e9 08 06
	[  +9.601084] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 6b 2f 7c 08 35 08 06
	[  +6.680640] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 7a 15 04 2e f9 08 06
	[  +0.000316] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 26 9c 63 03 a8 a5 08 06
	[  +0.000500] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5e bf e9 59 0c fc 08 06
	[ +14.220693] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 6b 48 e9 3e 65 08 06
	[  +0.000354] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 96 6b 2f 7c 08 35 08 06
	[ +17.192216] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 ce b1 a0 1c 7b 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 56 d0 e6 62 ca 08 06
	
	
	==> etcd [82e0c2a9046e51b0c87c4fd9091f57046e27ffacb7128ebc53052f54a248bf14] <==
	{"level":"warn","ts":"2025-12-13T13:13:56.016005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.024094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.030437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.036852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.043403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.049650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.055912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.062098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.068630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.077419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.083446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.090474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.097048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.103575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.110095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.117029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.123398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.129853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.136862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.143270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.149789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.164149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.170954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.177493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:13:56.224503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59010","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:14:16 up  1:56,  0 user,  load average: 0.37, 0.54, 1.02
	Linux dockerenv-252506 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c503c1293b08d4dc6e1a2e529c67bf037c8b1fe6130196a34ab9cdb2666c508b] <==
	I1213 13:14:05.681455       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:14:05.681801       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1213 13:14:05.681960       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:14:05.681976       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:14:05.682002       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:14:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:14:05.793946       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:14:05.881111       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:14:05.881276       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:14:05.881607       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 13:14:06.181641       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:14:06.181674       1 metrics.go:72] Registering metrics
	I1213 13:14:06.181752       1 controller.go:711] "Syncing nftables rules"
	I1213 13:14:15.795074       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:14:15.795243       1 main.go:301] handling current node
	
	
	==> kube-apiserver [caddbfa5d97747cbe51ba129626dacb777ef5f4b878f744556b485478c5f4dac] <==
	I1213 13:13:56.671300       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 13:13:56.672712       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 13:13:56.675056       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:13:56.675099       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1213 13:13:56.699608       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:13:56.700057       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 13:13:56.850195       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 13:13:57.575457       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1213 13:13:57.580139       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1213 13:13:57.580152       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 13:13:58.008604       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:13:58.043954       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:13:58.179758       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1213 13:13:58.185475       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1213 13:13:58.186471       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 13:13:58.190894       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:13:58.606695       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 13:13:59.059352       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 13:13:59.069572       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1213 13:13:59.076959       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 13:14:04.310040       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:14:04.313787       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:14:04.608856       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1213 13:14:04.709719       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [f057bc0663506d01acd991d32f37f840f69de38a3a6cb24dc87382f3f0613d4c] <==
	I1213 13:14:03.606602       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 13:14:03.606658       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 13:14:03.606715       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1213 13:14:03.606730       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1213 13:14:03.606736       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 13:14:03.606804       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 13:14:03.606812       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 13:14:03.606736       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 13:14:03.606754       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 13:14:03.607122       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 13:14:03.607154       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 13:14:03.607221       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 13:14:03.607239       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 13:14:03.607249       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 13:14:03.607594       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 13:14:03.607818       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1213 13:14:03.607921       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 13:14:03.608071       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="dockerenv-252506"
	I1213 13:14:03.608130       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1213 13:14:03.610169       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 13:14:03.611430       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 13:14:03.611444       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:14:03.615842       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 13:14:03.622035       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 13:14:03.628526       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e0bc1d77ab0181c91cf9d9d58d9b358b9fca3a0f97b8e6592ddfae0eed4fec0e] <==
	I1213 13:14:05.140459       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:14:05.210109       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 13:14:05.311242       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 13:14:05.311311       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1213 13:14:05.311431       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:14:05.334591       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:14:05.334650       1 server_linux.go:132] "Using iptables Proxier"
	I1213 13:14:05.340678       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:14:05.341081       1 server.go:527] "Version info" version="v1.34.2"
	I1213 13:14:05.341122       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:14:05.342343       1 config.go:200] "Starting service config controller"
	I1213 13:14:05.342373       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:14:05.342402       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:14:05.342411       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:14:05.342460       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:14:05.342468       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:14:05.342526       1 config.go:309] "Starting node config controller"
	I1213 13:14:05.342532       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:14:05.342537       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:14:05.442587       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 13:14:05.442611       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 13:14:05.442645       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1647811cfe86fb3099e1a52d78dfd3a67e5fa7d7c58da58502e4497a19a16703] <==
	E1213 13:13:56.616245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 13:13:56.616256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 13:13:56.616285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 13:13:56.616380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:13:56.616449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 13:13:56.616449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 13:13:56.616454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 13:13:56.616536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 13:13:56.616555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 13:13:56.616685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 13:13:56.616691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 13:13:57.437686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:13:57.481749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 13:13:57.481869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 13:13:57.503624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 13:13:57.567562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 13:13:57.578748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 13:13:57.599984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 13:13:57.689667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 13:13:57.705730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 13:13:57.754243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 13:13:57.792527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 13:13:57.817808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:13:57.836942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1213 13:14:00.613765       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 13:13:59 dockerenv-252506 kubelet[1423]: I1213 13:13:59.919670    1423 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-dockerenv-252506"
	Dec 13 13:13:59 dockerenv-252506 kubelet[1423]: E1213 13:13:59.925029    1423 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-dockerenv-252506\" already exists" pod="kube-system/etcd-dockerenv-252506"
	Dec 13 13:13:59 dockerenv-252506 kubelet[1423]: E1213 13:13:59.926093    1423 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-dockerenv-252506\" already exists" pod="kube-system/kube-scheduler-dockerenv-252506"
	Dec 13 13:13:59 dockerenv-252506 kubelet[1423]: E1213 13:13:59.927010    1423 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-dockerenv-252506\" already exists" pod="kube-system/kube-apiserver-dockerenv-252506"
	Dec 13 13:13:59 dockerenv-252506 kubelet[1423]: I1213 13:13:59.952017    1423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-dockerenv-252506" podStartSLOduration=0.951993067 podStartE2EDuration="951.993067ms" podCreationTimestamp="2025-12-13 13:13:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:13:59.940936397 +0000 UTC m=+1.121275100" watchObservedRunningTime="2025-12-13 13:13:59.951993067 +0000 UTC m=+1.132331763"
	Dec 13 13:13:59 dockerenv-252506 kubelet[1423]: I1213 13:13:59.965219    1423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-dockerenv-252506" podStartSLOduration=0.965201235 podStartE2EDuration="965.201235ms" podCreationTimestamp="2025-12-13 13:13:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:13:59.952253188 +0000 UTC m=+1.132591882" watchObservedRunningTime="2025-12-13 13:13:59.965201235 +0000 UTC m=+1.145539941"
	Dec 13 13:13:59 dockerenv-252506 kubelet[1423]: I1213 13:13:59.975434    1423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-dockerenv-252506" podStartSLOduration=0.975415985 podStartE2EDuration="975.415985ms" podCreationTimestamp="2025-12-13 13:13:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:13:59.965380623 +0000 UTC m=+1.145719342" watchObservedRunningTime="2025-12-13 13:13:59.975415985 +0000 UTC m=+1.155754685"
	Dec 13 13:13:59 dockerenv-252506 kubelet[1423]: I1213 13:13:59.975551    1423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-dockerenv-252506" podStartSLOduration=0.975546329 podStartE2EDuration="975.546329ms" podCreationTimestamp="2025-12-13 13:13:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:13:59.975368392 +0000 UTC m=+1.155707083" watchObservedRunningTime="2025-12-13 13:13:59.975546329 +0000 UTC m=+1.155885027"
	Dec 13 13:14:03 dockerenv-252506 kubelet[1423]: I1213 13:14:03.645939    1423 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 13 13:14:03 dockerenv-252506 kubelet[1423]: I1213 13:14:03.646610    1423 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 13 13:14:04 dockerenv-252506 kubelet[1423]: I1213 13:14:04.724682    1423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fac09e89-bec1-4aff-8d6e-d38798fb296d-kube-proxy\") pod \"kube-proxy-s5vgs\" (UID: \"fac09e89-bec1-4aff-8d6e-d38798fb296d\") " pod="kube-system/kube-proxy-s5vgs"
	Dec 13 13:14:04 dockerenv-252506 kubelet[1423]: I1213 13:14:04.724759    1423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1511ecb8-68fd-4c14-af7d-a51c2ad4294c-xtables-lock\") pod \"kindnet-4924b\" (UID: \"1511ecb8-68fd-4c14-af7d-a51c2ad4294c\") " pod="kube-system/kindnet-4924b"
	Dec 13 13:14:04 dockerenv-252506 kubelet[1423]: I1213 13:14:04.724786    1423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64jpc\" (UniqueName: \"kubernetes.io/projected/1511ecb8-68fd-4c14-af7d-a51c2ad4294c-kube-api-access-64jpc\") pod \"kindnet-4924b\" (UID: \"1511ecb8-68fd-4c14-af7d-a51c2ad4294c\") " pod="kube-system/kindnet-4924b"
	Dec 13 13:14:04 dockerenv-252506 kubelet[1423]: I1213 13:14:04.724821    1423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fac09e89-bec1-4aff-8d6e-d38798fb296d-xtables-lock\") pod \"kube-proxy-s5vgs\" (UID: \"fac09e89-bec1-4aff-8d6e-d38798fb296d\") " pod="kube-system/kube-proxy-s5vgs"
	Dec 13 13:14:04 dockerenv-252506 kubelet[1423]: I1213 13:14:04.724842    1423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ww2w\" (UniqueName: \"kubernetes.io/projected/fac09e89-bec1-4aff-8d6e-d38798fb296d-kube-api-access-7ww2w\") pod \"kube-proxy-s5vgs\" (UID: \"fac09e89-bec1-4aff-8d6e-d38798fb296d\") " pod="kube-system/kube-proxy-s5vgs"
	Dec 13 13:14:04 dockerenv-252506 kubelet[1423]: I1213 13:14:04.724867    1423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1511ecb8-68fd-4c14-af7d-a51c2ad4294c-cni-cfg\") pod \"kindnet-4924b\" (UID: \"1511ecb8-68fd-4c14-af7d-a51c2ad4294c\") " pod="kube-system/kindnet-4924b"
	Dec 13 13:14:04 dockerenv-252506 kubelet[1423]: I1213 13:14:04.724888    1423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fac09e89-bec1-4aff-8d6e-d38798fb296d-lib-modules\") pod \"kube-proxy-s5vgs\" (UID: \"fac09e89-bec1-4aff-8d6e-d38798fb296d\") " pod="kube-system/kube-proxy-s5vgs"
	Dec 13 13:14:04 dockerenv-252506 kubelet[1423]: I1213 13:14:04.724908    1423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1511ecb8-68fd-4c14-af7d-a51c2ad4294c-lib-modules\") pod \"kindnet-4924b\" (UID: \"1511ecb8-68fd-4c14-af7d-a51c2ad4294c\") " pod="kube-system/kindnet-4924b"
	Dec 13 13:14:05 dockerenv-252506 kubelet[1423]: I1213 13:14:05.940236    1423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4924b" podStartSLOduration=1.940212879 podStartE2EDuration="1.940212879s" podCreationTimestamp="2025-12-13 13:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:14:05.939913877 +0000 UTC m=+7.120252579" watchObservedRunningTime="2025-12-13 13:14:05.940212879 +0000 UTC m=+7.120551581"
	Dec 13 13:14:05 dockerenv-252506 kubelet[1423]: I1213 13:14:05.950019    1423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s5vgs" podStartSLOduration=1.949996055 podStartE2EDuration="1.949996055s" podCreationTimestamp="2025-12-13 13:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:14:05.949831573 +0000 UTC m=+7.130170277" watchObservedRunningTime="2025-12-13 13:14:05.949996055 +0000 UTC m=+7.130334755"
	Dec 13 13:14:15 dockerenv-252506 kubelet[1423]: I1213 13:14:15.813186    1423 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 13 13:14:15 dockerenv-252506 kubelet[1423]: I1213 13:14:15.905867    1423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c36392ca-9d08-4e2e-b504-ae2e99d9a787-config-volume\") pod \"coredns-66bc5c9577-nks5c\" (UID: \"c36392ca-9d08-4e2e-b504-ae2e99d9a787\") " pod="kube-system/coredns-66bc5c9577-nks5c"
	Dec 13 13:14:15 dockerenv-252506 kubelet[1423]: I1213 13:14:15.905921    1423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f668p\" (UniqueName: \"kubernetes.io/projected/c36392ca-9d08-4e2e-b504-ae2e99d9a787-kube-api-access-f668p\") pod \"coredns-66bc5c9577-nks5c\" (UID: \"c36392ca-9d08-4e2e-b504-ae2e99d9a787\") " pod="kube-system/coredns-66bc5c9577-nks5c"
	Dec 13 13:14:15 dockerenv-252506 kubelet[1423]: I1213 13:14:15.905958    1423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f40ec169-9283-4a1a-b35d-b5e78961833c-tmp\") pod \"storage-provisioner\" (UID: \"f40ec169-9283-4a1a-b35d-b5e78961833c\") " pod="kube-system/storage-provisioner"
	Dec 13 13:14:15 dockerenv-252506 kubelet[1423]: I1213 13:14:15.905980    1423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4fgj\" (UniqueName: \"kubernetes.io/projected/f40ec169-9283-4a1a-b35d-b5e78961833c-kube-api-access-b4fgj\") pod \"storage-provisioner\" (UID: \"f40ec169-9283-4a1a-b35d-b5e78961833c\") " pod="kube-system/storage-provisioner"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p dockerenv-252506 -n dockerenv-252506
helpers_test.go:270: (dbg) Run:  kubectl --context dockerenv-252506 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-66bc5c9577-nks5c storage-provisioner
helpers_test.go:283: ======> post-mortem[TestDockerEnvContainerd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context dockerenv-252506 describe pod coredns-66bc5c9577-nks5c storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context dockerenv-252506 describe pod coredns-66bc5c9577-nks5c storage-provisioner: exit status 1 (56.889765ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-nks5c" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context dockerenv-252506 describe pod coredns-66bc5c9577-nks5c storage-provisioner: exit status 1
helpers_test.go:176: Cleaning up "dockerenv-252506" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-252506
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-252506: (2.400040558s)
--- FAIL: TestDockerEnvContainerd (41.30s)
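For context on the NotFound errors above: the kubelet log shows both pods living under kube-system (kube-system/coredns-66bc5c9577-nks5c and kube-system/storage-provisioner), while the describe step was given bare pod names, so kubectl most likely looked for them in the default namespace. Below is a minimal client-go sketch of the same post-mortem query that keeps the namespace attached to each name; the kubeconfig path and error handling are illustrative assumptions, not part of the test harness.

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the harness would point this at the profile's kubeconfig.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same filter the harness uses (status.phase!=Running), listed across all namespaces ("").
	pods, err := client.CoreV1().Pods("").List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// Printing namespace/name avoids the NotFound seen above when bare names
		// are later fed to `kubectl describe` in the default namespace.
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

Output such as kube-system/coredns-66bc5c9577-nks5c could then be passed to kubectl describe with -n <namespace> without hitting NotFound.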

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (302.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-217219 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-217219 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-217219 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-217219 --alsologtostderr -v=1] stderr:
I1213 13:16:56.054541  455164 out.go:360] Setting OutFile to fd 1 ...
I1213 13:16:56.054658  455164 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:56.054666  455164 out.go:374] Setting ErrFile to fd 2...
I1213 13:16:56.054671  455164 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:56.054906  455164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
I1213 13:16:56.055167  455164 mustload.go:66] Loading cluster: functional-217219
I1213 13:16:56.055574  455164 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 13:16:56.055950  455164 cli_runner.go:164] Run: docker container inspect functional-217219 --format={{.State.Status}}
I1213 13:16:56.074383  455164 host.go:66] Checking if "functional-217219" exists ...
I1213 13:16:56.074657  455164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 13:16:56.128883  455164 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-13 13:16:56.119271108 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1213 13:16:56.128986  455164 api_server.go:166] Checking apiserver status ...
I1213 13:16:56.129043  455164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1213 13:16:56.129078  455164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-217219
I1213 13:16:56.147847  455164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/functional-217219/id_rsa Username:docker}
I1213 13:16:56.251375  455164 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4982/cgroup
W1213 13:16:56.260048  455164 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4982/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1213 13:16:56.260118  455164 ssh_runner.go:195] Run: ls
I1213 13:16:56.264194  455164 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1213 13:16:56.268465  455164 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1213 13:16:56.268517  455164 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1213 13:16:56.268693  455164 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 13:16:56.268713  455164 addons.go:70] Setting dashboard=true in profile "functional-217219"
I1213 13:16:56.268722  455164 addons.go:239] Setting addon dashboard=true in "functional-217219"
I1213 13:16:56.268770  455164 host.go:66] Checking if "functional-217219" exists ...
I1213 13:16:56.269231  455164 cli_runner.go:164] Run: docker container inspect functional-217219 --format={{.State.Status}}
I1213 13:16:56.289445  455164 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1213 13:16:56.290644  455164 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1213 13:16:56.291655  455164 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1213 13:16:56.291672  455164 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1213 13:16:56.291737  455164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-217219
I1213 13:16:56.309795  455164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/functional-217219/id_rsa Username:docker}
I1213 13:16:56.414641  455164 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1213 13:16:56.414672  455164 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1213 13:16:56.427954  455164 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1213 13:16:56.427976  455164 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1213 13:16:56.442134  455164 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1213 13:16:56.442160  455164 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1213 13:16:56.457211  455164 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1213 13:16:56.457251  455164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1213 13:16:56.470194  455164 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1213 13:16:56.470219  455164 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1213 13:16:56.484484  455164 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1213 13:16:56.484527  455164 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1213 13:16:56.497575  455164 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1213 13:16:56.497597  455164 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1213 13:16:56.510439  455164 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1213 13:16:56.510464  455164 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1213 13:16:56.523096  455164 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1213 13:16:56.523120  455164 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1213 13:16:56.535780  455164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1213 13:16:56.986499  455164 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-217219 addons enable metrics-server

                                                
                                                
I1213 13:16:56.987784  455164 addons.go:202] Writing out "functional-217219" config to set dashboard=true...
W1213 13:16:56.988064  455164 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1213 13:16:56.988930  455164 kapi.go:59] client config for functional-217219: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.key", CAFile:"/home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1213 13:16:56.989519  455164 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1213 13:16:56.989535  455164 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1213 13:16:56.989540  455164 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1213 13:16:56.989547  455164 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1213 13:16:56.989554  455164 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1213 13:16:56.996908  455164 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  18a1abdc-681c-448d-b19e-094e855185d0 790 0 2025-12-13 13:16:56 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-13 13:16:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.107.110.43,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.107.110.43],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1213 13:16:56.997102  455164 out.go:285] * Launching proxy ...
* Launching proxy ...
I1213 13:16:56.997184  455164 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-217219 proxy --port 36195]
I1213 13:16:56.997470  455164 dashboard.go:159] Waiting for kubectl to output host:port ...
I1213 13:16:57.039547  455164 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1213 13:16:57.039628  455164 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1213 13:16:57.047018  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[41c36e08-fc62-4bfa-bb82-a7d37bae3735] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc000772540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003d88c0 TLS:<nil>}
I1213 13:16:57.047124  455164 retry.go:31] will retry after 135.29µs: Temporary Error: unexpected response code: 503
I1213 13:16:57.050192  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fa3bdef9-f0e3-41b5-8f8e-93c5697917c3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc000535700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208dc0 TLS:<nil>}
I1213 13:16:57.050240  455164 retry.go:31] will retry after 152.452µs: Temporary Error: unexpected response code: 503
I1213 13:16:57.053376  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9690a297-aa60-4fd7-8ea9-aff8ca16bb96] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc001724100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003d8a00 TLS:<nil>}
I1213 13:16:57.053441  455164 retry.go:31] will retry after 306.196µs: Temporary Error: unexpected response code: 503
I1213 13:16:57.056645  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a27dbc38-1611-4edb-97e9-1b24510e87a2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc000535800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00012d040 TLS:<nil>}
I1213 13:16:57.056693  455164 retry.go:31] will retry after 452.66µs: Temporary Error: unexpected response code: 503
I1213 13:16:57.059990  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[70168c55-6adb-428c-b1d5-508dfe00c489] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc001724200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003d8b40 TLS:<nil>}
I1213 13:16:57.060048  455164 retry.go:31] will retry after 602.366µs: Temporary Error: unexpected response code: 503
I1213 13:16:57.062924  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d3d3da2c-4723-4c6d-ad7a-3420ac1d4479] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc000772680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00012d2c0 TLS:<nil>}
I1213 13:16:57.062964  455164 retry.go:31] will retry after 711.711µs: Temporary Error: unexpected response code: 503
I1213 13:16:57.065731  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8741bd1e-43ef-437d-9d20-ac53bb59fc83] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc000535900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208f00 TLS:<nil>}
I1213 13:16:57.065778  455164 retry.go:31] will retry after 1.310319ms: Temporary Error: unexpected response code: 503
I1213 13:16:57.069691  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[13411df5-60a0-45a1-bfe6-6cb6b631c11b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc001724300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003d8dc0 TLS:<nil>}
I1213 13:16:57.069722  455164 retry.go:31] will retry after 897.099µs: Temporary Error: unexpected response code: 503
I1213 13:16:57.072493  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[43dc04ae-383c-4663-a8ba-5e30ddacf564] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc000772780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00012d400 TLS:<nil>}
I1213 13:16:57.072524  455164 retry.go:31] will retry after 2.627057ms: Temporary Error: unexpected response code: 503
I1213 13:16:57.077820  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ee7951ef-80f3-43c0-a98c-f78d70bab61b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc000535a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209040 TLS:<nil>}
I1213 13:16:57.077875  455164 retry.go:31] will retry after 4.376086ms: Temporary Error: unexpected response code: 503
I1213 13:16:57.085004  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[15475ffe-37fc-42e4-857c-eb9703c6f8f6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc000772880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003d8f00 TLS:<nil>}
I1213 13:16:57.085082  455164 retry.go:31] will retry after 4.680663ms: Temporary Error: unexpected response code: 503
I1213 13:16:57.092347  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d61644db-bb54-43ec-89e9-2010d69a0d76] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc001724400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209180 TLS:<nil>}
I1213 13:16:57.092410  455164 retry.go:31] will retry after 11.517026ms: Temporary Error: unexpected response code: 503
I1213 13:16:57.107033  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[133059bb-86cd-4e4a-ab0b-7e2b398f1df3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc001724500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00012d540 TLS:<nil>}
I1213 13:16:57.107083  455164 retry.go:31] will retry after 14.67582ms: Temporary Error: unexpected response code: 503
I1213 13:16:57.125243  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b947ce18-6eef-44a8-9b27-1f60717bc287] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc001724580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00012d680 TLS:<nil>}
I1213 13:16:57.125412  455164 retry.go:31] will retry after 16.329107ms: Temporary Error: unexpected response code: 503
I1213 13:16:57.145434  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[29daae71-0ee4-4223-9adb-7af8c3fdbdfc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc000772ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00012d7c0 TLS:<nil>}
I1213 13:16:57.145494  455164 retry.go:31] will retry after 39.108307ms: Temporary Error: unexpected response code: 503
I1213 13:16:57.187395  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[96bcbb22-d96b-468a-a638-da53fa479b75] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc000535b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002092c0 TLS:<nil>}
I1213 13:16:57.187457  455164 retry.go:31] will retry after 24.691544ms: Temporary Error: unexpected response code: 503
I1213 13:16:57.215731  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7fcd3d21-dc8f-4eb2-865b-a0746943ece8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc000535c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003d9040 TLS:<nil>}
I1213 13:16:57.215796  455164 retry.go:31] will retry after 70.327718ms: Temporary Error: unexpected response code: 503
I1213 13:16:57.289973  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3acdea89-a265-4253-b88c-921da3aabe97] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc000535cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003d9180 TLS:<nil>}
I1213 13:16:57.290060  455164 retry.go:31] will retry after 77.590609ms: Temporary Error: unexpected response code: 503
I1213 13:16:57.370876  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[da20581e-e31e-48db-9425-36cb44a1e48a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc000773f00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003d92c0 TLS:<nil>}
I1213 13:16:57.370954  455164 retry.go:31] will retry after 182.516396ms: Temporary Error: unexpected response code: 503
I1213 13:16:57.557071  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[da4d1055-c32a-4955-a49d-3a9c37fc49b6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc0016c8000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209400 TLS:<nil>}
I1213 13:16:57.557143  455164 retry.go:31] will retry after 122.020229ms: Temporary Error: unexpected response code: 503
I1213 13:16:57.682147  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[97558799-95f3-4cb6-a55b-b0e73ff98d5e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc000535e40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209540 TLS:<nil>}
I1213 13:16:57.682208  455164 retry.go:31] will retry after 256.157279ms: Temporary Error: unexpected response code: 503
I1213 13:16:57.941298  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f1cb40c8-48f0-465b-a3e6-4ee5403b0135] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:57 GMT]] Body:0xc0017246c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003d9400 TLS:<nil>}
I1213 13:16:57.941390  455164 retry.go:31] will retry after 408.683579ms: Temporary Error: unexpected response code: 503
I1213 13:16:58.353893  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bf4a73f5-7853-4630-b9a8-cc97edf48197] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:58 GMT]] Body:0xc000535f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00012d900 TLS:<nil>}
I1213 13:16:58.353964  455164 retry.go:31] will retry after 807.323418ms: Temporary Error: unexpected response code: 503
I1213 13:16:59.164160  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2669df70-1a6e-41e4-a142-ee8bdaab0456] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:16:59 GMT]] Body:0xc001796040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003d9540 TLS:<nil>}
I1213 13:16:59.164237  455164 retry.go:31] will retry after 1.405463013s: Temporary Error: unexpected response code: 503
I1213 13:17:00.572530  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3f1f51bb-543e-41a1-8a46-6ae4458ce484] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:17:00 GMT]] Body:0xc001796100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003d9680 TLS:<nil>}
I1213 13:17:00.572590  455164 retry.go:31] will retry after 1.390650533s: Temporary Error: unexpected response code: 503
I1213 13:17:01.967211  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[71f7cd68-feaf-40a5-b02e-0f7c41986fb8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:17:01 GMT]] Body:0xc0017247c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003d9b80 TLS:<nil>}
I1213 13:17:01.967298  455164 retry.go:31] will retry after 1.456901059s: Temporary Error: unexpected response code: 503
I1213 13:17:03.428473  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cf0a93f6-a465-4b81-918e-c36a94084336] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:17:03 GMT]] Body:0xc001796240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00012da40 TLS:<nil>}
I1213 13:17:03.428552  455164 retry.go:31] will retry after 2.874930377s: Temporary Error: unexpected response code: 503
I1213 13:17:06.307577  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a2ce81b4-92db-486a-9b66-13c4e6299d7d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:17:06 GMT]] Body:0xc0016c81c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017a2000 TLS:<nil>}
I1213 13:17:06.307640  455164 retry.go:31] will retry after 3.775722794s: Temporary Error: unexpected response code: 503
I1213 13:17:10.088424  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f72293dc-f642-4ce6-b793-bc772fed936f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:17:10 GMT]] Body:0xc0016c8240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017a2140 TLS:<nil>}
I1213 13:17:10.088487  455164 retry.go:31] will retry after 12.07580614s: Temporary Error: unexpected response code: 503
I1213 13:17:22.167715  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9021df66-f78e-4cef-8854-338d51644735] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:17:22 GMT]] Body:0xc001796400 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017a2280 TLS:<nil>}
I1213 13:17:22.167792  455164 retry.go:31] will retry after 10.149982067s: Temporary Error: unexpected response code: 503
I1213 13:17:32.321798  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5cef86e1-075d-4643-86ce-3a8099fb12a5] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:17:32 GMT]] Body:0xc0017248c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209680 TLS:<nil>}
I1213 13:17:32.321869  455164 retry.go:31] will retry after 18.787598706s: Temporary Error: unexpected response code: 503
I1213 13:17:51.112981  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[77af6ac9-e013-4b9b-8aa5-4ea411ac8378] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:17:51 GMT]] Body:0xc001796480 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00012db80 TLS:<nil>}
I1213 13:17:51.113080  455164 retry.go:31] will retry after 39.447950149s: Temporary Error: unexpected response code: 503
I1213 13:18:30.566270  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3a1e706e-44b5-4271-96de-de5a24757c2b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:18:30 GMT]] Body:0xc001724980 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209900 TLS:<nil>}
I1213 13:18:30.566365  455164 retry.go:31] will retry after 33.50779696s: Temporary Error: unexpected response code: 503
I1213 13:19:04.079857  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[86b0e6cf-4020-46ec-ad54-0e032ba33349] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:19:04 GMT]] Body:0xc000842300 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208280 TLS:<nil>}
I1213 13:19:04.079952  455164 retry.go:31] will retry after 30.442816828s: Temporary Error: unexpected response code: 503
I1213 13:19:34.528712  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6f1e6afe-e5ae-4706-8bd9-89d0f0d69120] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:19:34 GMT]] Body:0xc0016c8180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017a23c0 TLS:<nil>}
I1213 13:19:34.528808  455164 retry.go:31] will retry after 37.801480747s: Temporary Error: unexpected response code: 503
I1213 13:20:12.333930  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[46f1735e-7384-4445-a25c-0f1f60cf35cd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:20:12 GMT]] Body:0xc000842400 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002083c0 TLS:<nil>}
I1213 13:20:12.334018  455164 retry.go:31] will retry after 48.950710436s: Temporary Error: unexpected response code: 503
I1213 13:21:01.288852  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fad038ec-b1c7-45c5-a069-31e371c7f510] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:21:01 GMT]] Body:0xc0016c80c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208640 TLS:<nil>}
I1213 13:21:01.288936  455164 retry.go:31] will retry after 39.872395631s: Temporary Error: unexpected response code: 503
I1213 13:21:41.165141  455164 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f21d5653-074a-4cb3-91f4-178e4b6ccbb8] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 13:21:41 GMT]] Body:0xc0016c81c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017a2500 TLS:<nil>}
I1213 13:21:41.165227  455164 retry.go:31] will retry after 1m26.101008487s: Temporary Error: unexpected response code: 503
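The loop above is the dashboard health check backing off between attempts: dashboard.go polls the kubectl-proxied dashboard URL and retry.go waits a progressively longer (apparently jittered) delay after each 503 before trying again, until the overall wait runs out. The Go snippet below is a minimal, hypothetical sketch of that poll-and-back-off pattern; the URL is copied from the log lines above, while the function name, delay policy, cap, and the 5-minute budget are illustrative assumptions and not minikube's actual code.

// A minimal, hypothetical sketch of the poll-with-backoff pattern visible in
// the dashboard.go/retry.go lines above. The URL comes from the log; the
// function name, delay policy and deadline are assumptions, not minikube's
// implementation.
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// waitForProxy keeps GETting url until it answers 200 or the deadline expires,
// sleeping a growing, jittered delay between attempts, much like the
// "will retry after ..." lines in the log.
func waitForProxy(url string, deadline time.Duration) error {
	start := time.Now()
	delay := 5 * time.Millisecond
	for time.Since(start) < deadline {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the dashboard is finally serving
			}
			fmt.Printf("unexpected response code: %d, will retry after %v\n", resp.StatusCode, delay)
		}
		time.Sleep(delay)
		// grow the delay with some jitter, capped so a single wait stays bounded
		delay = time.Duration(float64(delay) * (1.5 + rand.Float64()))
		if delay > 90*time.Second {
			delay = 90 * time.Second
		}
	}
	return fmt.Errorf("proxy at %s still unavailable after %v", url, deadline)
}

func main() {
	// kubectl proxy address observed in the log above.
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	// The 5-minute budget is an assumption; the test's real timeout is not shown here.
	if err := waitForProxy(url, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In this run the endpoint never stopped returning 503, so the backoff simply grew until the test gave up and the post-mortem below was collected.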
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-217219
helpers_test.go:244: (dbg) docker inspect functional-217219:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "267cac329215397201fcf0f3cc9c713e7adbbca9965a8e52f9e5c8ed24bdc0b7",
	        "Created": "2025-12-13T13:14:53.821668965Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 441055,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:14:53.855091732Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/267cac329215397201fcf0f3cc9c713e7adbbca9965a8e52f9e5c8ed24bdc0b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/267cac329215397201fcf0f3cc9c713e7adbbca9965a8e52f9e5c8ed24bdc0b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/267cac329215397201fcf0f3cc9c713e7adbbca9965a8e52f9e5c8ed24bdc0b7/hosts",
	        "LogPath": "/var/lib/docker/containers/267cac329215397201fcf0f3cc9c713e7adbbca9965a8e52f9e5c8ed24bdc0b7/267cac329215397201fcf0f3cc9c713e7adbbca9965a8e52f9e5c8ed24bdc0b7-json.log",
	        "Name": "/functional-217219",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-217219:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-217219",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "267cac329215397201fcf0f3cc9c713e7adbbca9965a8e52f9e5c8ed24bdc0b7",
	                "LowerDir": "/var/lib/docker/overlay2/df603dc1966ac76b16e56445678bd546d94f91cfe84e66554c69ae21e54a2c10-init/diff:/var/lib/docker/overlay2/be5aa5e3490e76c6aea57ece480ce7168b4c08e9f5040b5571a6aeb87c809618/diff",
	                "MergedDir": "/var/lib/docker/overlay2/df603dc1966ac76b16e56445678bd546d94f91cfe84e66554c69ae21e54a2c10/merged",
	                "UpperDir": "/var/lib/docker/overlay2/df603dc1966ac76b16e56445678bd546d94f91cfe84e66554c69ae21e54a2c10/diff",
	                "WorkDir": "/var/lib/docker/overlay2/df603dc1966ac76b16e56445678bd546d94f91cfe84e66554c69ae21e54a2c10/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-217219",
	                "Source": "/var/lib/docker/volumes/functional-217219/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-217219",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-217219",
	                "name.minikube.sigs.k8s.io": "functional-217219",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "43965f2ea64a9cb50009d0aa8b6b8a65fd0f879704954298865051911fadca06",
	            "SandboxKey": "/var/run/docker/netns/43965f2ea64a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-217219": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b9c053418420e36e497556ccfef59f888defff88b5571a4e55e97886727070a0",
	                    "EndpointID": "289f872058d1f84f283923dedd3455226c2880eb56ac6568bbdb2b9fa1d15af4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "36:b1:85:0c:4e:97",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-217219",
	                        "267cac329215"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
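The inspect output above shows that the kic container for functional-217219 publishes its ports only on 127.0.0.1, for example the apiserver port 8441/tcp is bound to host port 33170. As a hypothetical illustration (not part of the test suite), the Go helper below shows one way to pull such a mapping out of docker inspect's JSON; the container name and port come from this report, and the helper name hostPort and its structure are assumptions.

// Illustrative only: extract the host port a container publishes for a given
// container port, mirroring what can be read manually from the
// NetworkSettings.Ports block above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func hostPort(container, containerPort string) (string, error) {
	out, err := exec.Command("docker", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 {
		return "", fmt.Errorf("no container named %s", container)
	}
	bindings := entries[0].NetworkSettings.Ports[containerPort]
	if len(bindings) == 0 {
		return "", fmt.Errorf("%s is not published", containerPort)
	}
	return bindings[0].HostPort, nil
}

func main() {
	// 8441/tcp is this profile's apiserver port; per the inspect output above
	// it is bound to 127.0.0.1:33170 on the host.
	port, err := hostPort("functional-217219", "8441/tcp")
	fmt.Println(port, err)
}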
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-217219 -n functional-217219
helpers_test.go:253: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-217219 logs -n 25: (1.258567778s)
helpers_test.go:261: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-217219 ssh -- ls -la /mount-9p                                                                          │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ ssh            │ functional-217219 ssh sudo umount -f /mount-9p                                                                     │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ ssh            │ functional-217219 ssh findmnt -T /mount1                                                                           │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ mount          │ -p functional-217219 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821121883/001:/mount3 --alsologtostderr -v=1 │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ mount          │ -p functional-217219 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821121883/001:/mount2 --alsologtostderr -v=1 │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ mount          │ -p functional-217219 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821121883/001:/mount1 --alsologtostderr -v=1 │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ ssh            │ functional-217219 ssh findmnt -T /mount1                                                                           │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ ssh            │ functional-217219 ssh findmnt -T /mount2                                                                           │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ ssh            │ functional-217219 ssh findmnt -T /mount3                                                                           │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ mount          │ -p functional-217219 --kill=true                                                                                   │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ start          │ -p functional-217219 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd    │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ start          │ -p functional-217219 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd              │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ tunnel         │ functional-217219 tunnel --alsologtostderr                                                                         │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ tunnel         │ functional-217219 tunnel --alsologtostderr                                                                         │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ tunnel         │ functional-217219 tunnel --alsologtostderr                                                                         │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ update-context │ functional-217219 update-context --alsologtostderr -v=2                                                            │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ update-context │ functional-217219 update-context --alsologtostderr -v=2                                                            │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ update-context │ functional-217219 update-context --alsologtostderr -v=2                                                            │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ image          │ functional-217219 image ls --format short --alsologtostderr                                                        │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ ssh            │ functional-217219 ssh pgrep buildkitd                                                                              │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ image          │ functional-217219 image build -t localhost/my-image:functional-217219 testdata/build --alsologtostderr             │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ image          │ functional-217219 image ls                                                                                         │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ image          │ functional-217219 image ls --format yaml --alsologtostderr                                                         │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ image          │ functional-217219 image ls --format json --alsologtostderr                                                         │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ image          │ functional-217219 image ls --format table --alsologtostderr                                                        │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:17:05
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:17:05.007497  457510 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:17:05.007758  457510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:17:05.007769  457510 out.go:374] Setting ErrFile to fd 2...
	I1213 13:17:05.007773  457510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:17:05.008011  457510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:17:05.008458  457510 out.go:368] Setting JSON to false
	I1213 13:17:05.009523  457510 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7168,"bootTime":1765624657,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:17:05.009588  457510 start.go:143] virtualization: kvm guest
	I1213 13:17:05.011469  457510 out.go:179] * [functional-217219] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:17:05.012757  457510 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:17:05.012756  457510 notify.go:221] Checking for updates...
	I1213 13:17:05.015162  457510 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:17:05.016449  457510 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	I1213 13:17:05.017522  457510 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	I1213 13:17:05.018748  457510 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:17:05.019943  457510 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:17:05.021982  457510 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:17:05.022647  457510 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:17:05.046439  457510 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:17:05.046532  457510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:17:05.101979  457510 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-13 13:17:05.091749233 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:17:05.102134  457510 docker.go:319] overlay module found
	I1213 13:17:05.103833  457510 out.go:179] * Using the docker driver based on existing profile
	I1213 13:17:05.104927  457510 start.go:309] selected driver: docker
	I1213 13:17:05.104942  457510 start.go:927] validating driver "docker" against &{Name:functional-217219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-217219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:17:05.105064  457510 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:17:05.105172  457510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:17:05.163383  457510 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-13 13:17:05.153555725 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:17:05.164082  457510 cni.go:84] Creating CNI manager for ""
	I1213 13:17:05.164176  457510 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 13:17:05.164221  457510 start.go:353] cluster config:
	{Name:functional-217219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-217219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:17:05.165909  457510 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	90b4c8cd362b8       a236f84b9d5d2       4 minutes ago       Running             nginx                     0                   759a8af2f7b92       nginx-svc                                   default
	f58996cb8c062       56cc512116c8f       4 minutes ago       Exited              mount-munger              0                   5ed6dc8126cfb       busybox-mount                               default
	52eecffb395f2       a236f84b9d5d2       5 minutes ago       Running             myfrontend                0                   4f5cfbe1eba74       sp-pod                                      default
	ca8ea5c9affa7       20d0be4ee4524       5 minutes ago       Running             mysql                     0                   8956ff5f0d592       mysql-6bcdcbc558-shvdj                      default
	03e94fe565e59       6e38f40d628db       5 minutes ago       Running             storage-provisioner       2                   221e76bdb3431       storage-provisioner                         kube-system
	3292b99a911f6       01e8bacf0f500       5 minutes ago       Running             kube-controller-manager   2                   2aa335bfa0c20       kube-controller-manager-functional-217219   kube-system
	e37e9c32a2f67       a5f569d49a979       5 minutes ago       Running             kube-apiserver            0                   81fcd73690e6f       kube-apiserver-functional-217219            kube-system
	c2fd836d9420c       a3e246e9556e9       5 minutes ago       Running             etcd                      1                   17b34e8ff6aef       etcd-functional-217219                      kube-system
	53c527df4ac1e       8aa150647e88a       6 minutes ago       Running             kube-proxy                1                   845d5103a5824       kube-proxy-tglrm                            kube-system
	7c27625128f5d       409467f978b4a       6 minutes ago       Running             kindnet-cni               1                   de0e2cb033a46       kindnet-nm7k8                               kube-system
	141b66734546c       01e8bacf0f500       6 minutes ago       Exited              kube-controller-manager   1                   2aa335bfa0c20       kube-controller-manager-functional-217219   kube-system
	901e1f8b3cfdd       88320b5498ff2       6 minutes ago       Running             kube-scheduler            1                   ec8916e88f6be       kube-scheduler-functional-217219            kube-system
	7a42c28392a1c       52546a367cc9e       6 minutes ago       Running             coredns                   1                   cea487497ed5f       coredns-66bc5c9577-tqrcj                    kube-system
	66ea250632d47       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       1                   221e76bdb3431       storage-provisioner                         kube-system
	9aab0d4e28067       52546a367cc9e       6 minutes ago       Exited              coredns                   0                   cea487497ed5f       coredns-66bc5c9577-tqrcj                    kube-system
	3786540370f77       409467f978b4a       6 minutes ago       Exited              kindnet-cni               0                   de0e2cb033a46       kindnet-nm7k8                               kube-system
	acf55abb51f47       8aa150647e88a       6 minutes ago       Exited              kube-proxy                0                   845d5103a5824       kube-proxy-tglrm                            kube-system
	8bcb10561ea3e       88320b5498ff2       6 minutes ago       Exited              kube-scheduler            0                   ec8916e88f6be       kube-scheduler-functional-217219            kube-system
	fe53cd1d10ae7       a3e246e9556e9       6 minutes ago       Exited              etcd                      0                   17b34e8ff6aef       etcd-functional-217219                      kube-system
	
	
	==> containerd <==
	Dec 13 13:21:48 functional-217219 containerd[3829]: time="2025-12-13T13:21:48.308483369Z" level=info msg="container event discarded" container=52eecffb395f2df670f9cdb61ffd1b500771216b1f76147f6002d59946b1e859 type=CONTAINER_STARTED_EVENT
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.344187855Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57d68ac2_1a27_4c6d_8832_be16dfc85bd8.slice/cri-containerd-ca8ea5c9affa746e43e018611564d0f9a5528165f9b6dba9c3cf41b2475d2b84.scope/hugetlb.2MB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.344286565Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57d68ac2_1a27_4c6d_8832_be16dfc85bd8.slice/cri-containerd-ca8ea5c9affa746e43e018611564d0f9a5528165f9b6dba9c3cf41b2475d2b84.scope/hugetlb.1GB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.345230946Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b795015_67c2_478d_9955_9144a43d1cf2.slice/cri-containerd-52eecffb395f2df670f9cdb61ffd1b500771216b1f76147f6002d59946b1e859.scope/hugetlb.2MB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.345379457Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b795015_67c2_478d_9955_9144a43d1cf2.slice/cri-containerd-52eecffb395f2df670f9cdb61ffd1b500771216b1f76147f6002d59946b1e859.scope/hugetlb.1GB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.346088866Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod397e7db7a35cd72332fd84ac3b8e8f69.slice/cri-containerd-901e1f8b3cfddaf2b2ab53c55acfe523315eeb4548c58d83a264ed9621304c3f.scope/hugetlb.2MB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.346203056Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod397e7db7a35cd72332fd84ac3b8e8f69.slice/cri-containerd-901e1f8b3cfddaf2b2ab53c55acfe523315eeb4548c58d83a264ed9621304c3f.scope/hugetlb.1GB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.346903706Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod876c1631_3b45_4953_b1db_1a9e410ab20f.slice/cri-containerd-7c27625128f5d2a8f9adf1abb326dc63f502c10156ee296ece00207990abaf9b.scope/hugetlb.2MB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.346992243Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod876c1631_3b45_4953_b1db_1a9e410ab20f.slice/cri-containerd-7c27625128f5d2a8f9adf1abb326dc63f502c10156ee296ece00207990abaf9b.scope/hugetlb.1GB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.347813215Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06a070ca_d0c6_4877_b7be_38b40019056b.slice/cri-containerd-03e94fe565e59ebabcabe4ea0f31fc2402044879d36cbe08ce5bda3c8e456271.scope/hugetlb.2MB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.347913282Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06a070ca_d0c6_4877_b7be_38b40019056b.slice/cri-containerd-03e94fe565e59ebabcabe4ea0f31fc2402044879d36cbe08ce5bda3c8e456271.scope/hugetlb.1GB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.348619246Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf284028e0e74523d8c08cb4bdf1c09a.slice/cri-containerd-c2fd836d9420cc3ef039cfc57643a4f2bcbaf0ccc829507a7bf07da20f24249d.scope/hugetlb.2MB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.348729571Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf284028e0e74523d8c08cb4bdf1c09a.slice/cri-containerd-c2fd836d9420cc3ef039cfc57643a4f2bcbaf0ccc829507a7bf07da20f24249d.scope/hugetlb.1GB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.349539908Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d24341b_a63c_4617_a687_613e5de69f74.slice/cri-containerd-90b4c8cd362b8b526829474a0d4f68911ea9b852c566fd4ed362eca6c1408385.scope/hugetlb.2MB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.349611451Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d24341b_a63c_4617_a687_613e5de69f74.slice/cri-containerd-90b4c8cd362b8b526829474a0d4f68911ea9b852c566fd4ed362eca6c1408385.scope/hugetlb.1GB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.350253480Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1e22895_fe18_4e7d_875a_0898730707d4.slice/cri-containerd-53c527df4ac1e73d8711bcf3f1c29a9683273f4bdb7e54383059bdcc69655e0c.scope/hugetlb.2MB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.350379953Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1e22895_fe18_4e7d_875a_0898730707d4.slice/cri-containerd-53c527df4ac1e73d8711bcf3f1c29a9683273f4bdb7e54383059bdcc69655e0c.scope/hugetlb.1GB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.351235245Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod881773ca_93a7_426c_ae18_d405fd712fd3.slice/cri-containerd-7a42c28392a1cbf0c1f1999cec72a3d5688910e89ae7b8ee17973990c8f62744.scope/hugetlb.2MB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.351374068Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod881773ca_93a7_426c_ae18_d405fd712fd3.slice/cri-containerd-7a42c28392a1cbf0c1f1999cec72a3d5688910e89ae7b8ee17973990c8f62744.scope/hugetlb.1GB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.352031317Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod137ec6b4dadf4135f88b33cc1489700f.slice/cri-containerd-e37e9c32a2f67d25d6e71a7a104fa14146231e6cb30b2fe965fd8c4b5c570c99.scope/hugetlb.2MB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.352127659Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod137ec6b4dadf4135f88b33cc1489700f.slice/cri-containerd-e37e9c32a2f67d25d6e71a7a104fa14146231e6cb30b2fe965fd8c4b5c570c99.scope/hugetlb.1GB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.352916708Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f6bb1a4283ad63a60305646fdaa013f.slice/cri-containerd-3292b99a911f698d25ea44543a5320a5583f88039b6d95e1136fa85f0f2d083a.scope/hugetlb.2MB.events\""
	Dec 13 13:21:52 functional-217219 containerd[3829]: time="2025-12-13T13:21:52.353019370Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f6bb1a4283ad63a60305646fdaa013f.slice/cri-containerd-3292b99a911f698d25ea44543a5320a5583f88039b6d95e1136fa85f0f2d083a.scope/hugetlb.1GB.events\""
	Dec 13 13:21:56 functional-217219 containerd[3829]: time="2025-12-13T13:21:56.455780596Z" level=info msg="container event discarded" container=5ed6dc8126cfb3e4991f165438d5a915744dcaf46c0a6b76640f72f1d3e5e069 type=CONTAINER_CREATED_EVENT
	Dec 13 13:21:56 functional-217219 containerd[3829]: time="2025-12-13T13:21:56.455865745Z" level=info msg="container event discarded" container=5ed6dc8126cfb3e4991f165438d5a915744dcaf46c0a6b76640f72f1d3e5e069 type=CONTAINER_STARTED_EVENT
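The repeated "unable to parse \"max 0\" as a uint" errors above come from cgroup v2 hugetlb.<size>.events files: their content is a "key value" line (here "max 0"), not a bare unsigned integer, so reading the whole file as a single uint fails on every stats pass. A minimal sketch of parsing such a file as key/value pairs, purely illustrative and not containerd's implementation:

```go
// Sketch: parse a cgroup v2 hugetlb.<size>.events file, whose content is
// lines like "max 0" rather than a single bare unsigned integer.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

func readEvents(path string) (map[string]uint64, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	out := make(map[string]uint64)
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text()) // e.g. ["max", "0"]
		if len(fields) != 2 {
			continue
		}
		n, err := strconv.ParseUint(fields[1], 10, 64)
		if err != nil {
			return nil, fmt.Errorf("parse %q in %s: %w", fields[1], path, err)
		}
		out[fields[0]] = n
	}
	return out, sc.Err()
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: readevents <path to hugetlb events file>")
		os.Exit(1)
	}
	ev, err := readEvents(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("max =", ev["max"])
}
```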
	
	
	==> coredns [7a42c28392a1cbf0c1f1999cec72a3d5688910e89ae7b8ee17973990c8f62744] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41031 - 44421 "HINFO IN 2792457249630543027.2540707244043656700. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020764263s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
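The "dial tcp 10.96.0.1:443: connect: connection refused" failures above are list calls from CoreDNS's kubernetes plugin against the kubernetes service VIP while the apiserver was restarting; once the wait expired, CoreDNS started with an unsynced API (the WARNING line). A rough client-go sketch of the same kind of list call, assuming in-cluster credentials; this is not CoreDNS's actual code:

```go
// Sketch: list Services via client-go; during a control-plane restart this
// call fails with the "connection refused" error seen in the log above.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	svcs, err := client.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{Limit: 500})
	if err != nil {
		fmt.Fprintln(os.Stderr, "list services failed:", err)
		os.Exit(1)
	}
	fmt.Println("listed", len(svcs.Items), "services")
}
```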
	
	
	==> coredns [9aab0d4e28067e3e11fb0510f0e25209725738b92a0969ae0dc297b7f8ea68e3] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45427 - 41474 "HINFO IN 5600990053674929246.5254148613491254788. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033376203s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-217219
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-217219
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=functional-217219
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_15_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:15:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-217219
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:21:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:21:29 +0000   Sat, 13 Dec 2025 13:15:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:21:29 +0000   Sat, 13 Dec 2025 13:15:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:21:29 +0000   Sat, 13 Dec 2025 13:15:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:21:29 +0000   Sat, 13 Dec 2025 13:15:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-217219
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                e5e96994-f304-4728-9e5e-3e08ef7d5355
	  Boot ID:                    90a4a0ca-634d-4c7c-8727-6b2f644cc467
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-vgt4d                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  default                     hello-node-connect-7d85dfc575-hn58w           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  default                     mysql-6bcdcbc558-shvdj                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     5m29s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 coredns-66bc5c9577-tqrcj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m44s
	  kube-system                 etcd-functional-217219                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m50s
	  kube-system                 kindnet-nm7k8                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m44s
	  kube-system                 kube-apiserver-functional-217219              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-controller-manager-functional-217219     200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m50s
	  kube-system                 kube-proxy-tglrm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 kube-scheduler-functional-217219              100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m50s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-l9dp7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-nmg94         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m43s                  kube-proxy       
	  Normal  Starting                 5m49s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m55s (x8 over 6m55s)  kubelet          Node functional-217219 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m55s (x8 over 6m55s)  kubelet          Node functional-217219 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m55s (x7 over 6m55s)  kubelet          Node functional-217219 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m50s                  kubelet          Node functional-217219 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  6m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    6m50s                  kubelet          Node functional-217219 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m50s                  kubelet          Node functional-217219 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m50s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           6m45s                  node-controller  Node functional-217219 event: Registered Node functional-217219 in Controller
	  Normal  NodeReady                6m33s                  kubelet          Node functional-217219 status is now: NodeReady
	  Normal  Starting                 5m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m56s (x8 over 5m56s)  kubelet          Node functional-217219 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m56s (x8 over 5m56s)  kubelet          Node functional-217219 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m56s (x7 over 5m56s)  kubelet          Node functional-217219 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m50s                  node-controller  Node functional-217219 event: Registered Node functional-217219 in Controller
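For reference, the "Allocated resources" percentages are the summed pod requests/limits divided by the node allocatable values listed above (8 CPUs = 8000m, 32863344Ki memory), apparently truncated to whole percent. A small sketch reproducing that arithmetic with the figures copied from the table:

```go
// Sketch: reproduce the Allocated resources percentages from the describe
// output above. Percentages are truncated, which matches the values shown.
package main

import "fmt"

func pct(request, allocatable float64) int {
	return int(request / allocatable * 100) // e.g. 2.87% -> 2%
}

func main() {
	const (
		cpuAllocM = 8000.0     // 8 CPUs in millicores
		memAllocK = 32863344.0 // allocatable memory in Ki
	)
	fmt.Println("cpu requests:", pct(1450, cpuAllocM), "%")     // 18
	fmt.Println("cpu limits:  ", pct(800, cpuAllocM), "%")      // 10
	fmt.Println("mem requests:", pct(732*1024, memAllocK), "%") // 2
	fmt.Println("mem limits:  ", pct(920*1024, memAllocK), "%") // 2
}
```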
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 3d 25 07 3f b0 08 06
	[ +15.550392] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 5b b2 4e f6 0c 08 06
	[  +0.000437] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 3d 25 07 3f b0 08 06
	[Dec13 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 56 d0 e6 62 ca 08 06
	[  +0.000156] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 2b b1 e9 34 e9 08 06
	[  +9.601084] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 6b 2f 7c 08 35 08 06
	[  +6.680640] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 7a 15 04 2e f9 08 06
	[  +0.000316] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 26 9c 63 03 a8 a5 08 06
	[  +0.000500] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5e bf e9 59 0c fc 08 06
	[ +14.220693] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 6b 48 e9 3e 65 08 06
	[  +0.000354] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 96 6b 2f 7c 08 35 08 06
	[ +17.192216] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 ce b1 a0 1c 7b 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 56 d0 e6 62 ca 08 06
	
	
	==> etcd [c2fd836d9420cc3ef039cfc57643a4f2bcbaf0ccc829507a7bf07da20f24249d] <==
	{"level":"warn","ts":"2025-12-13T13:16:03.013254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.019781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.027271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.034446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.042010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.048580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.055830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.062650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.072426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.079943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.087661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.095422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.102338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.109260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.116274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.123456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.130959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.138420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.144921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.151380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.165206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.172073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.179742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.229630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50262","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T13:16:32.386613Z","caller":"traceutil/trace.go:172","msg":"trace[2074850217] transaction","detail":"{read_only:false; response_revision:661; number_of_response:1; }","duration":"101.66303ms","start":"2025-12-13T13:16:32.284924Z","end":"2025-12-13T13:16:32.386587Z","steps":["trace[2074850217] 'process raft request'  (duration: 101.514281ms)"],"step_count":1}
	
	
	==> etcd [fe53cd1d10ae7440c0ab4771c70cf05cdfe232b267bbef5ad5d6d4ba4380ea7d] <==
	{"level":"warn","ts":"2025-12-13T13:15:04.665435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:15:04.671935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:15:04.678586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:15:04.693071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:15:04.701524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:15:04.708250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:15:04.758717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43246","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T13:15:59.512732Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T13:15:59.512828Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-217219","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-13T13:15:59.512954Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T13:15:59.514568Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T13:15:59.514640Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T13:15:59.514698Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-12-13T13:15:59.514720Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T13:15:59.514769Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-13T13:15:59.514758Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-12-13T13:15:59.514779Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T13:15:59.514747Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-13T13:15:59.514711Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T13:15:59.514861Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T13:15:59.514882Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T13:15:59.516712Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-13T13:15:59.516772Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T13:15:59.516813Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-13T13:15:59.516851Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-217219","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 13:21:57 up  2:04,  0 user,  load average: 0.24, 0.29, 0.72
	Linux functional-217219 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3786540370f774bebf4ad5bb115fd5bfc6e9e4a7c27d3b0315f9d7d75c1b8fbd] <==
	I1213 13:15:14.209800       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:15:14.210088       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1213 13:15:14.210254       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:15:14.210273       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:15:14.210306       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:15:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:15:14.410714       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:15:14.411117       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:15:14.411273       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:15:14.493128       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 13:15:14.793360       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:15:14.793646       1 metrics.go:72] Registering metrics
	I1213 13:15:14.793730       1 controller.go:711] "Syncing nftables rules"
	I1213 13:15:24.411537       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:15:24.411594       1 main.go:301] handling current node
	I1213 13:15:34.414449       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:15:34.414484       1 main.go:301] handling current node
	I1213 13:15:44.413624       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:15:44.413700       1 main.go:301] handling current node
	
	
	==> kindnet [7c27625128f5d2a8f9adf1abb326dc63f502c10156ee296ece00207990abaf9b] <==
	I1213 13:19:50.099710       1 main.go:301] handling current node
	I1213 13:20:00.100395       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:20:00.100475       1 main.go:301] handling current node
	I1213 13:20:10.100141       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:20:10.100179       1 main.go:301] handling current node
	I1213 13:20:20.100363       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:20:20.100397       1 main.go:301] handling current node
	I1213 13:20:30.100223       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:20:30.100272       1 main.go:301] handling current node
	I1213 13:20:40.099515       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:20:40.099561       1 main.go:301] handling current node
	I1213 13:20:50.099561       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:20:50.099592       1 main.go:301] handling current node
	I1213 13:21:00.099661       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:21:00.099706       1 main.go:301] handling current node
	I1213 13:21:10.099678       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:21:10.099747       1 main.go:301] handling current node
	I1213 13:21:20.106953       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:21:20.106992       1 main.go:301] handling current node
	I1213 13:21:30.099508       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:21:30.099551       1 main.go:301] handling current node
	I1213 13:21:40.102022       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:21:40.102067       1 main.go:301] handling current node
	I1213 13:21:50.106510       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:21:50.106548       1 main.go:301] handling current node
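This kindnet log is dominated by a fixed-interval node reconcile: the same "Handling node with IPs ... handling current node" pair repeats roughly every ten seconds. A minimal sketch of that loop shape, with the 10s period assumed from the timestamps above; this is not kindnet's actual code:

```go
// Sketch: a fixed-interval reconcile loop of the kind the repeating
// "handling current node" lines above suggest (assumed 10s period).
package main

import (
	"context"
	"fmt"
	"time"
)

func run(ctx context.Context, reconcile func()) {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			reconcile() // e.g. resync per-node routing/nftables state
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 35*time.Second)
	defer cancel()
	run(ctx, func() { fmt.Println("handling current node") })
}
```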
	
	
	==> kube-apiserver [e37e9c32a2f67d25d6e71a7a104fa14146231e6cb30b2fe965fd8c4b5c570c99] <==
	I1213 13:16:03.717807       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 13:16:04.528567       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 13:16:04.590747       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1213 13:16:04.797596       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1213 13:16:04.798920       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 13:16:04.805252       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:16:05.328352       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 13:16:05.416055       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 13:16:05.466232       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:16:05.471371       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:16:14.181755       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 13:16:20.231872       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.217.175"}
	I1213 13:16:24.763172       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.101.47.42"}
	I1213 13:16:28.522818       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.30.133"}
	I1213 13:16:34.397382       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.208.35"}
	E1213 13:16:46.729704       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:48828: use of closed network connection
	E1213 13:16:47.100770       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:48844: use of closed network connection
	E1213 13:16:47.442037       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:48870: use of closed network connection
	E1213 13:16:49.303880       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:48902: use of closed network connection
	E1213 13:16:51.978129       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:32828: use of closed network connection
	E1213 13:16:54.972638       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:32862: use of closed network connection
	I1213 13:16:56.839910       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 13:16:56.952284       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.110.43"}
	I1213 13:16:56.978535       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.128.248"}
	I1213 13:17:05.766597       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.248.124"}
	
	
	==> kube-controller-manager [141b66734546c193cf86e7ea5259b3bb9b502e841b5861fa4c842c1a0ca3d361] <==
	I1213 13:15:50.399709       1 serving.go:386] Generated self-signed cert in-memory
	I1213 13:15:51.563626       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1213 13:15:51.563651       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:15:51.564981       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1213 13:15:51.564981       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1213 13:15:51.565311       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1213 13:15:51.565372       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1213 13:16:01.567279       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [3292b99a911f698d25ea44543a5320a5583f88039b6d95e1136fa85f0f2d083a] <==
	I1213 13:16:07.109385       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 13:16:07.109393       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1213 13:16:07.109509       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 13:16:07.109530       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 13:16:07.109530       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 13:16:07.109547       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 13:16:07.109679       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-217219"
	I1213 13:16:07.109755       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 13:16:07.109694       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 13:16:07.110835       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 13:16:07.112032       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 13:16:07.114909       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 13:16:07.114926       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 13:16:07.114932       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 13:16:07.116510       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:16:07.116534       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 13:16:07.117268       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 13:16:07.132495       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 13:16:07.136739       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1213 13:16:56.883638       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 13:16:56.887893       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 13:16:56.892107       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 13:16:56.892376       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 13:16:56.895059       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 13:16:56.900680       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [53c527df4ac1e73d8711bcf3f1c29a9683273f4bdb7e54383059bdcc69655e0c] <==
	I1213 13:15:49.866097       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1213 13:15:49.867246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-217219&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:15:51.355644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-217219&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:15:54.504131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-217219&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:15:58.444261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-217219&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1213 13:16:07.866251       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 13:16:07.866308       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1213 13:16:07.866465       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:16:07.901802       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:16:07.901867       1 server_linux.go:132] "Using iptables Proxier"
	I1213 13:16:07.908460       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:16:07.908868       1 server.go:527] "Version info" version="v1.34.2"
	I1213 13:16:07.908943       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:16:07.910297       1 config.go:200] "Starting service config controller"
	I1213 13:16:07.910343       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:16:07.910389       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:16:07.910412       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:16:07.910442       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:16:07.910448       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:16:07.910450       1 config.go:309] "Starting node config controller"
	I1213 13:16:07.910463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:16:07.910470       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:16:08.011167       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 13:16:08.011177       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 13:16:08.011224       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
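Both kube-proxy instances log the same startup sequence: each config controller reports "Waiting for caches to sync" and later "Caches are synced" once its informer has delivered the initial list. A rough client-go sketch of that pattern for a single node informer, assuming in-cluster credentials; this illustrates the mechanism, not kube-proxy's own wiring:

```go
// Sketch: shared-informer startup and cache sync, the pattern behind the
// "Waiting for caches to sync" / "Caches are synced" lines above.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	stop := make(chan struct{})
	defer close(stop)

	factory := informers.NewSharedInformerFactory(client, 0)
	nodes := factory.Core().V1().Nodes().Informer()

	factory.Start(stop) // starts list/watch in the background
	// Blocks until the initial list has been delivered ("Caches are synced").
	if !cache.WaitForCacheSync(stop, nodes.HasSynced) {
		fmt.Fprintln(os.Stderr, "timed out waiting for node informer cache")
		os.Exit(1)
	}
	fmt.Println("node informer cache synced")
}
```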
	
	
	==> kube-proxy [acf55abb51f47f355bd2b622402a34abf7413a3b947d4525e847dc15063de2a1] <==
	I1213 13:15:13.528052       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:15:13.602892       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 13:15:13.703700       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 13:15:13.703754       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1213 13:15:13.703883       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:15:13.728759       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:15:13.728828       1 server_linux.go:132] "Using iptables Proxier"
	I1213 13:15:13.734856       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:15:13.735373       1 server.go:527] "Version info" version="v1.34.2"
	I1213 13:15:13.735750       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:15:13.737715       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:15:13.737732       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:15:13.737758       1 config.go:200] "Starting service config controller"
	I1213 13:15:13.737763       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:15:13.737790       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:15:13.737795       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:15:13.738102       1 config.go:309] "Starting node config controller"
	I1213 13:15:13.738117       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:15:13.838210       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:15:13.838346       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 13:15:13.838353       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 13:15:13.838374       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8bcb10561ea3e90be79cae89691f165952a81eeda0ad0bca8ed1f950621aa6b3] <==
	E1213 13:15:05.149613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 13:15:05.149666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 13:15:05.149679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 13:15:05.149731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 13:15:05.149757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 13:15:06.002511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:15:06.029916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 13:15:06.093688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 13:15:06.115346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 13:15:06.127574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 13:15:06.138703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 13:15:06.149550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 13:15:06.157694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 13:15:06.166784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 13:15:06.210911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 13:15:06.295510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 13:15:06.315611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 13:15:06.357092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 13:15:06.364239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1213 13:15:09.346491       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:15:49.298831       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:15:49.298995       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 13:15:49.299026       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 13:15:49.299077       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 13:15:49.299107       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [901e1f8b3cfddaf2b2ab53c55acfe523315eeb4548c58d83a264ed9621304c3f] <==
	E1213 13:15:55.244840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 13:15:55.266600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 13:15:55.362652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 13:15:55.599950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 13:15:55.704056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 13:15:57.835008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 13:15:57.950960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 13:15:58.359263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 13:15:58.620909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 13:15:58.729237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 13:15:58.758827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 13:15:58.873695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 13:15:59.015629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:15:59.347791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 13:15:59.389718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 13:15:59.759879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 13:15:59.761206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 13:15:59.787798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:15:59.964036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 13:16:00.175041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 13:16:00.241756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 13:16:00.247280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 13:16:00.873708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 13:16:01.447060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1213 13:16:06.058584       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 13:20:37 functional-217219 kubelet[4824]: E1213 13:20:37.474357    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hn58w" podUID="ad01409e-8548-4297-8640-76b5030e77d5"
	Dec 13 13:20:39 functional-217219 kubelet[4824]: E1213 13:20:39.475568    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nmg94" podUID="7922ddd4-c728-47aa-8eb9-2aeb85704036"
	Dec 13 13:20:46 functional-217219 kubelet[4824]: E1213 13:20:46.475392    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-l9dp7" podUID="e1da2c8e-860b-46b9-bf72-15730e44b547"
	Dec 13 13:20:47 functional-217219 kubelet[4824]: E1213 13:20:47.474943    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-vgt4d" podUID="e9548e0a-4c34-4074-b36e-ff28177b494e"
	Dec 13 13:20:49 functional-217219 kubelet[4824]: E1213 13:20:49.474758    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hn58w" podUID="ad01409e-8548-4297-8640-76b5030e77d5"
	Dec 13 13:20:54 functional-217219 kubelet[4824]: E1213 13:20:54.475796    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nmg94" podUID="7922ddd4-c728-47aa-8eb9-2aeb85704036"
	Dec 13 13:20:57 functional-217219 kubelet[4824]: E1213 13:20:57.476097    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-l9dp7" podUID="e1da2c8e-860b-46b9-bf72-15730e44b547"
	Dec 13 13:21:00 functional-217219 kubelet[4824]: E1213 13:21:00.474263    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hn58w" podUID="ad01409e-8548-4297-8640-76b5030e77d5"
	Dec 13 13:21:00 functional-217219 kubelet[4824]: E1213 13:21:00.474519    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-vgt4d" podUID="e9548e0a-4c34-4074-b36e-ff28177b494e"
	Dec 13 13:21:06 functional-217219 kubelet[4824]: E1213 13:21:06.474877    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nmg94" podUID="7922ddd4-c728-47aa-8eb9-2aeb85704036"
	Dec 13 13:21:10 functional-217219 kubelet[4824]: E1213 13:21:10.475265    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-l9dp7" podUID="e1da2c8e-860b-46b9-bf72-15730e44b547"
	Dec 13 13:21:11 functional-217219 kubelet[4824]: E1213 13:21:11.475058    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-vgt4d" podUID="e9548e0a-4c34-4074-b36e-ff28177b494e"
	Dec 13 13:21:15 functional-217219 kubelet[4824]: E1213 13:21:15.474644    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hn58w" podUID="ad01409e-8548-4297-8640-76b5030e77d5"
	Dec 13 13:21:21 functional-217219 kubelet[4824]: E1213 13:21:21.476098    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nmg94" podUID="7922ddd4-c728-47aa-8eb9-2aeb85704036"
	Dec 13 13:21:24 functional-217219 kubelet[4824]: E1213 13:21:24.475835    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-l9dp7" podUID="e1da2c8e-860b-46b9-bf72-15730e44b547"
	Dec 13 13:21:25 functional-217219 kubelet[4824]: E1213 13:21:25.474347    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-vgt4d" podUID="e9548e0a-4c34-4074-b36e-ff28177b494e"
	Dec 13 13:21:30 functional-217219 kubelet[4824]: E1213 13:21:30.474920    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hn58w" podUID="ad01409e-8548-4297-8640-76b5030e77d5"
	Dec 13 13:21:32 functional-217219 kubelet[4824]: E1213 13:21:32.475960    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nmg94" podUID="7922ddd4-c728-47aa-8eb9-2aeb85704036"
	Dec 13 13:21:38 functional-217219 kubelet[4824]: E1213 13:21:38.475786    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-l9dp7" podUID="e1da2c8e-860b-46b9-bf72-15730e44b547"
	Dec 13 13:21:39 functional-217219 kubelet[4824]: E1213 13:21:39.478344    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-vgt4d" podUID="e9548e0a-4c34-4074-b36e-ff28177b494e"
	Dec 13 13:21:45 functional-217219 kubelet[4824]: E1213 13:21:45.474885    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hn58w" podUID="ad01409e-8548-4297-8640-76b5030e77d5"
	Dec 13 13:21:45 functional-217219 kubelet[4824]: E1213 13:21:45.475417    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nmg94" podUID="7922ddd4-c728-47aa-8eb9-2aeb85704036"
	Dec 13 13:21:50 functional-217219 kubelet[4824]: E1213 13:21:50.474466    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-vgt4d" podUID="e9548e0a-4c34-4074-b36e-ff28177b494e"
	Dec 13 13:21:52 functional-217219 kubelet[4824]: E1213 13:21:52.475038    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-l9dp7" podUID="e1da2c8e-860b-46b9-bf72-15730e44b547"
	Dec 13 13:21:57 functional-217219 kubelet[4824]: E1213 13:21:57.475662    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nmg94" podUID="7922ddd4-c728-47aa-8eb9-2aeb85704036"
	
	
	==> storage-provisioner [03e94fe565e59ebabcabe4ea0f31fc2402044879d36cbe08ce5bda3c8e456271] <==
	W1213 13:21:31.593684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:33.596269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:33.600159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:35.603367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:35.608409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:37.612367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:37.616682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:39.620268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:39.625045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:41.628217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:41.632021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:43.634907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:43.639457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:45.642594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:45.647212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:47.650189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:47.653864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:49.656887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:49.661655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:51.664810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:51.668640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:53.671679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:53.675837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:55.679007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:21:55.685720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [66ea250632d474e2b73e9cababb75cec75f9b7e974c0e91b118e92f14eb7e2d2] <==
	I1213 13:15:49.674529       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 13:15:49.677556       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-217219 -n functional-217219
helpers_test.go:270: (dbg) Run:  kubectl --context functional-217219 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-75c85bcc94-vgt4d hello-node-connect-7d85dfc575-hn58w dashboard-metrics-scraper-77bf4d6c4c-l9dp7 kubernetes-dashboard-855c9754f9-nmg94
helpers_test.go:283: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-217219 describe pod busybox-mount hello-node-75c85bcc94-vgt4d hello-node-connect-7d85dfc575-hn58w dashboard-metrics-scraper-77bf4d6c4c-l9dp7 kubernetes-dashboard-855c9754f9-nmg94
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-217219 describe pod busybox-mount hello-node-75c85bcc94-vgt4d hello-node-connect-7d85dfc575-hn58w dashboard-metrics-scraper-77bf4d6c4c-l9dp7 kubernetes-dashboard-855c9754f9-nmg94: exit status 1 (80.120089ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-217219/192.168.49.2
	Start Time:       Sat, 13 Dec 2025 13:16:56 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  containerd://f58996cb8c0621c27dde4338ef1a414880329db85a89c5b0d6732999df31c24c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 13 Dec 2025 13:16:58 +0000
	      Finished:     Sat, 13 Dec 2025 13:16:58 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xj5mj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-xj5mj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  5m2s  default-scheduler  Successfully assigned default/busybox-mount to functional-217219
	  Normal  Pulling    5m2s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.083s (2.083s including waiting). Image size: 2395207 bytes.
	  Normal  Created    5m    kubelet            Created container: mount-munger
	  Normal  Started    5m    kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-vgt4d
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-217219/192.168.49.2
	Start Time:       Sat, 13 Dec 2025 13:16:24 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vxx2p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vxx2p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m34s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-vgt4d to functional-217219
	  Warning  Failed     4m45s (x2 over 5m31s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  2m23s (x5 over 5m33s)  kubelet  Pulling image "kicbase/echo-server"
	  Warning  Failed   2m21s (x5 over 5m31s)  kubelet  Error: ErrImagePull
	  Warning  Failed   2m21s (x3 over 5m12s)  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   19s (x20 over 5m30s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  8s (x21 over 5m30s)   kubelet  Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-hn58w
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-217219/192.168.49.2
	Start Time:       Sat, 13 Dec 2025 13:16:34 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bvmfg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bvmfg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m24s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hn58w to functional-217219
	  Warning  Failed     3m47s (x4 over 5m15s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  2m26s (x5 over 5m24s)  kubelet  Pulling image "kicbase/echo-server"
	  Warning  Failed   2m23s (x5 over 5m15s)  kubelet  Error: ErrImagePull
	  Warning  Failed   2m23s                  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff  13s (x19 over 5m14s)  kubelet  Back-off pulling image "kicbase/echo-server"
	  Warning  Failed   13s (x19 over 5m14s)  kubelet  Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-l9dp7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-nmg94" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-217219 describe pod busybox-mount hello-node-75c85bcc94-vgt4d hello-node-connect-7d85dfc575-hn58w dashboard-metrics-scraper-77bf4d6c4c-l9dp7 kubernetes-dashboard-855c9754f9-nmg94: exit status 1
E1213 13:22:17.852568  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:22:45.554604  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/DashboardCmd (302.21s)
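Every image-pull failure recorded above is the same underlying problem: anonymous pulls against registry-1.docker.io were answered with 429 Too Many Requests. A minimal sketch, assuming curl and jq are available on the CI host and that the functional-217219 profile from the logs above is still running, of how the remaining anonymous pull quota could be checked and how the offending image could be side-loaded so the kubelet never has to contact Docker Hub (ratelimitpreview/test is Docker's documented rate-limit probe repository; out/minikube-linux-amd64 is the binary already used by this test run):

	# Fetch an anonymous pull token for Docker's rate-limit probe repository,
	# then read the RateLimit-* headers from a HEAD request on its manifest.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" \
	  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

	# One possible workaround: pull the image once on the host (authenticated,
	# if credentials exist) and load it into the profile's containerd store.
	docker pull kicbase/echo-server:latest
	out/minikube-linux-amd64 -p functional-217219 image load kicbase/echo-server:latest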

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (602.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-217219 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-217219 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-hn58w" [ad01409e-8548-4297-8640-76b5030e77d5] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:338: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-217219 -n functional-217219
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-13 13:26:34.726962673 +0000 UTC m=+1315.676563975
functional_test.go:1645: (dbg) Run:  kubectl --context functional-217219 describe po hello-node-connect-7d85dfc575-hn58w -n default
functional_test.go:1645: (dbg) kubectl --context functional-217219 describe po hello-node-connect-7d85dfc575-hn58w -n default:
Name:             hello-node-connect-7d85dfc575-hn58w
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-217219/192.168.49.2
Start Time:       Sat, 13 Dec 2025 13:16:34 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bvmfg (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-bvmfg:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hn58w to functional-217219
Warning  Failed     8m23s (x4 over 9m51s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling  7m2s (x5 over 10m)     kubelet  Pulling image "kicbase/echo-server"
Warning  Failed   6m59s (x5 over 9m51s)  kubelet  Error: ErrImagePull
Warning  Failed   6m59s                  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed   4m49s (x19 over 9m50s)  kubelet  Error: ImagePullBackOff
Normal   BackOff  4m24s (x21 over 9m50s)  kubelet  Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-217219 logs hello-node-connect-7d85dfc575-hn58w -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-217219 logs hello-node-connect-7d85dfc575-hn58w -n default: exit status 1 (61.218884ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-hn58w" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-217219 logs hello-node-connect-7d85dfc575-hn58w -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-217219 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-hn58w
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-217219/192.168.49.2
Start Time:       Sat, 13 Dec 2025 13:16:34 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bvmfg (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-bvmfg:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hn58w to functional-217219
Warning  Failed     8m23s (x4 over 9m51s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling  7m2s (x5 over 10m)     kubelet  Pulling image "kicbase/echo-server"
Warning  Failed   6m59s (x5 over 9m51s)  kubelet  Error: ErrImagePull
Warning  Failed   6m59s                  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed   4m49s (x19 over 9m50s)  kubelet  Error: ImagePullBackOff
Normal   BackOff  4m24s (x21 over 9m50s)  kubelet  Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-217219 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-217219 logs -l app=hello-node-connect: exit status 1 (96.954649ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-hn58w" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-217219 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-217219 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.208.35
IPs:                      10.110.208.35
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31109/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
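Endpoints is empty because the only pod matching the app=hello-node-connect selector never reached Ready; a quick check of both sides of that relationship (sketch, using the names from the output above):

kubectl --context functional-217219 get endpoints hello-node-connect
kubectl --context functional-217219 get pods -l app=hello-node-connect \
  -o jsonpath='{.items[0].status.containerStatuses[0].state.waiting.reason}'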
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-217219
helpers_test.go:244: (dbg) docker inspect functional-217219:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "267cac329215397201fcf0f3cc9c713e7adbbca9965a8e52f9e5c8ed24bdc0b7",
	        "Created": "2025-12-13T13:14:53.821668965Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 441055,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:14:53.855091732Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/267cac329215397201fcf0f3cc9c713e7adbbca9965a8e52f9e5c8ed24bdc0b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/267cac329215397201fcf0f3cc9c713e7adbbca9965a8e52f9e5c8ed24bdc0b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/267cac329215397201fcf0f3cc9c713e7adbbca9965a8e52f9e5c8ed24bdc0b7/hosts",
	        "LogPath": "/var/lib/docker/containers/267cac329215397201fcf0f3cc9c713e7adbbca9965a8e52f9e5c8ed24bdc0b7/267cac329215397201fcf0f3cc9c713e7adbbca9965a8e52f9e5c8ed24bdc0b7-json.log",
	        "Name": "/functional-217219",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-217219:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-217219",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "267cac329215397201fcf0f3cc9c713e7adbbca9965a8e52f9e5c8ed24bdc0b7",
	                "LowerDir": "/var/lib/docker/overlay2/df603dc1966ac76b16e56445678bd546d94f91cfe84e66554c69ae21e54a2c10-init/diff:/var/lib/docker/overlay2/be5aa5e3490e76c6aea57ece480ce7168b4c08e9f5040b5571a6aeb87c809618/diff",
	                "MergedDir": "/var/lib/docker/overlay2/df603dc1966ac76b16e56445678bd546d94f91cfe84e66554c69ae21e54a2c10/merged",
	                "UpperDir": "/var/lib/docker/overlay2/df603dc1966ac76b16e56445678bd546d94f91cfe84e66554c69ae21e54a2c10/diff",
	                "WorkDir": "/var/lib/docker/overlay2/df603dc1966ac76b16e56445678bd546d94f91cfe84e66554c69ae21e54a2c10/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-217219",
	                "Source": "/var/lib/docker/volumes/functional-217219/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-217219",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-217219",
	                "name.minikube.sigs.k8s.io": "functional-217219",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "43965f2ea64a9cb50009d0aa8b6b8a65fd0f879704954298865051911fadca06",
	            "SandboxKey": "/var/run/docker/netns/43965f2ea64a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-217219": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b9c053418420e36e497556ccfef59f888defff88b5571a4e55e97886727070a0",
	                    "EndpointID": "289f872058d1f84f283923dedd3455226c2880eb56ac6568bbdb2b9fa1d15af4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "36:b1:85:0c:4e:97",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-217219",
	                        "267cac329215"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
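Individual fields can be pulled from the same inspect document with Go templates instead of scanning the full JSON; a short sketch against this container:

docker inspect -f '{{ .State.Status }} pid={{ .State.Pid }}' functional-217219
docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-217219
docker inspect -f '{{ (index .NetworkSettings.Networks "functional-217219").IPAddress }}' functional-217219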
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-217219 -n functional-217219
helpers_test.go:253: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-217219 logs -n 25: (1.244055471s)
helpers_test.go:261: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-217219 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821121883/001:/mount1 --alsologtostderr -v=1 │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ ssh            │ functional-217219 ssh findmnt -T /mount1                                                                           │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ ssh            │ functional-217219 ssh findmnt -T /mount2                                                                           │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ ssh            │ functional-217219 ssh findmnt -T /mount3                                                                           │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ mount          │ -p functional-217219 --kill=true                                                                                   │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ start          │ -p functional-217219 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd    │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ start          │ -p functional-217219 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd              │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ tunnel         │ functional-217219 tunnel --alsologtostderr                                                                         │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ tunnel         │ functional-217219 tunnel --alsologtostderr                                                                         │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ tunnel         │ functional-217219 tunnel --alsologtostderr                                                                         │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ update-context │ functional-217219 update-context --alsologtostderr -v=2                                                            │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ update-context │ functional-217219 update-context --alsologtostderr -v=2                                                            │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ update-context │ functional-217219 update-context --alsologtostderr -v=2                                                            │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ image          │ functional-217219 image ls --format short --alsologtostderr                                                        │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ ssh            │ functional-217219 ssh pgrep buildkitd                                                                              │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │                     │
	│ image          │ functional-217219 image build -t localhost/my-image:functional-217219 testdata/build --alsologtostderr             │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ image          │ functional-217219 image ls                                                                                         │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ image          │ functional-217219 image ls --format yaml --alsologtostderr                                                         │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ image          │ functional-217219 image ls --format json --alsologtostderr                                                         │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ image          │ functional-217219 image ls --format table --alsologtostderr                                                        │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:17 UTC │ 13 Dec 25 13:17 UTC │
	│ service        │ functional-217219 service list                                                                                     │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:26 UTC │ 13 Dec 25 13:26 UTC │
	│ service        │ functional-217219 service list -o json                                                                             │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:26 UTC │ 13 Dec 25 13:26 UTC │
	│ service        │ functional-217219 service --namespace=default --https --url hello-node                                             │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:26 UTC │                     │
	│ service        │ functional-217219 service hello-node --url --format={{.IP}}                                                        │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:26 UTC │                     │
	│ service        │ functional-217219 service hello-node --url                                                                         │ functional-217219 │ jenkins │ v1.37.0 │ 13 Dec 25 13:26 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:17:05
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:17:05.007497  457510 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:17:05.007758  457510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:17:05.007769  457510 out.go:374] Setting ErrFile to fd 2...
	I1213 13:17:05.007773  457510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:17:05.008011  457510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:17:05.008458  457510 out.go:368] Setting JSON to false
	I1213 13:17:05.009523  457510 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7168,"bootTime":1765624657,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:17:05.009588  457510 start.go:143] virtualization: kvm guest
	I1213 13:17:05.011469  457510 out.go:179] * [functional-217219] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:17:05.012757  457510 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:17:05.012756  457510 notify.go:221] Checking for updates...
	I1213 13:17:05.015162  457510 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:17:05.016449  457510 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	I1213 13:17:05.017522  457510 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	I1213 13:17:05.018748  457510 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:17:05.019943  457510 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:17:05.021982  457510 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:17:05.022647  457510 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:17:05.046439  457510 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:17:05.046532  457510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:17:05.101979  457510 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-13 13:17:05.091749233 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:17:05.102134  457510 docker.go:319] overlay module found
	I1213 13:17:05.103833  457510 out.go:179] * Using the docker driver based on existing profile
	I1213 13:17:05.104927  457510 start.go:309] selected driver: docker
	I1213 13:17:05.104942  457510 start.go:927] validating driver "docker" against &{Name:functional-217219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-217219 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:17:05.105064  457510 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:17:05.105172  457510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:17:05.163383  457510 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-13 13:17:05.153555725 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:17:05.164082  457510 cni.go:84] Creating CNI manager for ""
	I1213 13:17:05.164176  457510 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 13:17:05.164221  457510 start.go:353] cluster config:
	{Name:functional-217219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-217219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:17:05.165909  457510 out.go:179] * dry-run validation complete!
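	# A hedged sketch, assuming the standard minikube profile layout under
	# MINIKUBE_HOME: the cluster config echoed above is the in-memory form of
	# the profile's config.json and can be read back with jq, e.g.
	jq '.KubernetesConfig.ContainerRuntime' \
	  /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/config.json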
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	90b4c8cd362b8       a236f84b9d5d2       9 minutes ago       Running             nginx                     0                   759a8af2f7b92       nginx-svc                                   default
	f58996cb8c062       56cc512116c8f       9 minutes ago       Exited              mount-munger              0                   5ed6dc8126cfb       busybox-mount                               default
	52eecffb395f2       a236f84b9d5d2       9 minutes ago       Running             myfrontend                0                   4f5cfbe1eba74       sp-pod                                      default
	ca8ea5c9affa7       20d0be4ee4524       9 minutes ago       Running             mysql                     0                   8956ff5f0d592       mysql-6bcdcbc558-shvdj                      default
	03e94fe565e59       6e38f40d628db       10 minutes ago      Running             storage-provisioner       2                   221e76bdb3431       storage-provisioner                         kube-system
	3292b99a911f6       01e8bacf0f500       10 minutes ago      Running             kube-controller-manager   2                   2aa335bfa0c20       kube-controller-manager-functional-217219   kube-system
	e37e9c32a2f67       a5f569d49a979       10 minutes ago      Running             kube-apiserver            0                   81fcd73690e6f       kube-apiserver-functional-217219            kube-system
	c2fd836d9420c       a3e246e9556e9       10 minutes ago      Running             etcd                      1                   17b34e8ff6aef       etcd-functional-217219                      kube-system
	53c527df4ac1e       8aa150647e88a       10 minutes ago      Running             kube-proxy                1                   845d5103a5824       kube-proxy-tglrm                            kube-system
	7c27625128f5d       409467f978b4a       10 minutes ago      Running             kindnet-cni               1                   de0e2cb033a46       kindnet-nm7k8                               kube-system
	141b66734546c       01e8bacf0f500       10 minutes ago      Exited              kube-controller-manager   1                   2aa335bfa0c20       kube-controller-manager-functional-217219   kube-system
	901e1f8b3cfdd       88320b5498ff2       10 minutes ago      Running             kube-scheduler            1                   ec8916e88f6be       kube-scheduler-functional-217219            kube-system
	7a42c28392a1c       52546a367cc9e       10 minutes ago      Running             coredns                   1                   cea487497ed5f       coredns-66bc5c9577-tqrcj                    kube-system
	66ea250632d47       6e38f40d628db       10 minutes ago      Exited              storage-provisioner       1                   221e76bdb3431       storage-provisioner                         kube-system
	9aab0d4e28067       52546a367cc9e       11 minutes ago      Exited              coredns                   0                   cea487497ed5f       coredns-66bc5c9577-tqrcj                    kube-system
	3786540370f77       409467f978b4a       11 minutes ago      Exited              kindnet-cni               0                   de0e2cb033a46       kindnet-nm7k8                               kube-system
	acf55abb51f47       8aa150647e88a       11 minutes ago      Exited              kube-proxy                0                   845d5103a5824       kube-proxy-tglrm                            kube-system
	8bcb10561ea3e       88320b5498ff2       11 minutes ago      Exited              kube-scheduler            0                   ec8916e88f6be       kube-scheduler-functional-217219            kube-system
	fe53cd1d10ae7       a3e246e9556e9       11 minutes ago      Exited              etcd                      0                   17b34e8ff6aef       etcd-functional-217219                      kube-system
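	# The table above is minikube's rendering of CRI state; a roughly equivalent
	# view can be taken directly on the node (sketch, assuming crictl is present
	# in the kicbase image, which is where this data normally comes from):
	minikube -p functional-217219 ssh -- sudo crictl ps -a
	minikube -p functional-217219 ssh -- sudo crictl pods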
	
	
	==> containerd <==
	Dec 13 13:26:22 functional-217219 containerd[3829]: time="2025-12-13T13:26:22.989952213Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod137ec6b4dadf4135f88b33cc1489700f.slice/cri-containerd-e37e9c32a2f67d25d6e71a7a104fa14146231e6cb30b2fe965fd8c4b5c570c99.scope/hugetlb.1GB.events\""
	Dec 13 13:26:22 functional-217219 containerd[3829]: time="2025-12-13T13:26:22.990568696Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f6bb1a4283ad63a60305646fdaa013f.slice/cri-containerd-3292b99a911f698d25ea44543a5320a5583f88039b6d95e1136fa85f0f2d083a.scope/hugetlb.2MB.events\""
	Dec 13 13:26:22 functional-217219 containerd[3829]: time="2025-12-13T13:26:22.990644402Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f6bb1a4283ad63a60305646fdaa013f.slice/cri-containerd-3292b99a911f698d25ea44543a5320a5583f88039b6d95e1136fa85f0f2d083a.scope/hugetlb.1GB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.004560354Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod397e7db7a35cd72332fd84ac3b8e8f69.slice/cri-containerd-901e1f8b3cfddaf2b2ab53c55acfe523315eeb4548c58d83a264ed9621304c3f.scope/hugetlb.2MB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.004646644Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod397e7db7a35cd72332fd84ac3b8e8f69.slice/cri-containerd-901e1f8b3cfddaf2b2ab53c55acfe523315eeb4548c58d83a264ed9621304c3f.scope/hugetlb.1GB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.005292358Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod876c1631_3b45_4953_b1db_1a9e410ab20f.slice/cri-containerd-7c27625128f5d2a8f9adf1abb326dc63f502c10156ee296ece00207990abaf9b.scope/hugetlb.2MB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.005398592Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod876c1631_3b45_4953_b1db_1a9e410ab20f.slice/cri-containerd-7c27625128f5d2a8f9adf1abb326dc63f502c10156ee296ece00207990abaf9b.scope/hugetlb.1GB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.006120552Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06a070ca_d0c6_4877_b7be_38b40019056b.slice/cri-containerd-03e94fe565e59ebabcabe4ea0f31fc2402044879d36cbe08ce5bda3c8e456271.scope/hugetlb.2MB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.006229141Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06a070ca_d0c6_4877_b7be_38b40019056b.slice/cri-containerd-03e94fe565e59ebabcabe4ea0f31fc2402044879d36cbe08ce5bda3c8e456271.scope/hugetlb.1GB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.006928931Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf284028e0e74523d8c08cb4bdf1c09a.slice/cri-containerd-c2fd836d9420cc3ef039cfc57643a4f2bcbaf0ccc829507a7bf07da20f24249d.scope/hugetlb.2MB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.007017603Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf284028e0e74523d8c08cb4bdf1c09a.slice/cri-containerd-c2fd836d9420cc3ef039cfc57643a4f2bcbaf0ccc829507a7bf07da20f24249d.scope/hugetlb.1GB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.007687692Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d24341b_a63c_4617_a687_613e5de69f74.slice/cri-containerd-90b4c8cd362b8b526829474a0d4f68911ea9b852c566fd4ed362eca6c1408385.scope/hugetlb.2MB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.007757463Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d24341b_a63c_4617_a687_613e5de69f74.slice/cri-containerd-90b4c8cd362b8b526829474a0d4f68911ea9b852c566fd4ed362eca6c1408385.scope/hugetlb.1GB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.008447346Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1e22895_fe18_4e7d_875a_0898730707d4.slice/cri-containerd-53c527df4ac1e73d8711bcf3f1c29a9683273f4bdb7e54383059bdcc69655e0c.scope/hugetlb.2MB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.008526044Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1e22895_fe18_4e7d_875a_0898730707d4.slice/cri-containerd-53c527df4ac1e73d8711bcf3f1c29a9683273f4bdb7e54383059bdcc69655e0c.scope/hugetlb.1GB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.009208151Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod881773ca_93a7_426c_ae18_d405fd712fd3.slice/cri-containerd-7a42c28392a1cbf0c1f1999cec72a3d5688910e89ae7b8ee17973990c8f62744.scope/hugetlb.2MB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.009308969Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod881773ca_93a7_426c_ae18_d405fd712fd3.slice/cri-containerd-7a42c28392a1cbf0c1f1999cec72a3d5688910e89ae7b8ee17973990c8f62744.scope/hugetlb.1GB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.009999416Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod137ec6b4dadf4135f88b33cc1489700f.slice/cri-containerd-e37e9c32a2f67d25d6e71a7a104fa14146231e6cb30b2fe965fd8c4b5c570c99.scope/hugetlb.2MB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.010098258Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod137ec6b4dadf4135f88b33cc1489700f.slice/cri-containerd-e37e9c32a2f67d25d6e71a7a104fa14146231e6cb30b2fe965fd8c4b5c570c99.scope/hugetlb.1GB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.010746155Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f6bb1a4283ad63a60305646fdaa013f.slice/cri-containerd-3292b99a911f698d25ea44543a5320a5583f88039b6d95e1136fa85f0f2d083a.scope/hugetlb.2MB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.010818186Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f6bb1a4283ad63a60305646fdaa013f.slice/cri-containerd-3292b99a911f698d25ea44543a5320a5583f88039b6d95e1136fa85f0f2d083a.scope/hugetlb.1GB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.011510019Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57d68ac2_1a27_4c6d_8832_be16dfc85bd8.slice/cri-containerd-ca8ea5c9affa746e43e018611564d0f9a5528165f9b6dba9c3cf41b2475d2b84.scope/hugetlb.2MB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.011623449Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57d68ac2_1a27_4c6d_8832_be16dfc85bd8.slice/cri-containerd-ca8ea5c9affa746e43e018611564d0f9a5528165f9b6dba9c3cf41b2475d2b84.scope/hugetlb.1GB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.012442011Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b795015_67c2_478d_9955_9144a43d1cf2.slice/cri-containerd-52eecffb395f2df670f9cdb61ffd1b500771216b1f76147f6002d59946b1e859.scope/hugetlb.2MB.events\""
	Dec 13 13:26:33 functional-217219 containerd[3829]: time="2025-12-13T13:26:33.012547780Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b795015_67c2_478d_9955_9144a43d1cf2.slice/cri-containerd-52eecffb395f2df670f9cdb61ffd1b500771216b1f76147f6002d59946b1e859.scope/hugetlb.1GB.events\""
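	# The repeated parse errors above come from metrics collection: on cgroup v2
	# each hugetlb.<size>.events file holds a single "max <count>" line, and the
	# message shows containerd trying to read that value as a bare uint. A sketch
	# for confirming the file contents on the node:
	minikube -p functional-217219 ssh -- \
	  'find /sys/fs/cgroup/kubepods.slice -name hugetlb.2MB.events | head -n 3 | xargs grep -H .'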
	
	
	==> coredns [7a42c28392a1cbf0c1f1999cec72a3d5688910e89ae7b8ee17973990c8f62744] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41031 - 44421 "HINFO IN 2792457249630543027.2540707244043656700. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020764263s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
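	# The connection-refused errors are consistent with the control-plane restart
	# visible in the container list above (10.96.0.1:443 is the in-cluster
	# apiserver service). A sketch for confirming the apiserver is healthy again:
	kubectl --context functional-217219 get --raw='/readyz?verbose' | tail -n 5
	kubectl --context functional-217219 -n kube-system get pods -l component=kube-apiserver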
	
	
	==> coredns [9aab0d4e28067e3e11fb0510f0e25209725738b92a0969ae0dc297b7f8ea68e3] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45427 - 41474 "HINFO IN 5600990053674929246.5254148613491254788. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033376203s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-217219
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-217219
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=functional-217219
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_15_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:15:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-217219
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:26:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:26:35 +0000   Sat, 13 Dec 2025 13:15:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:26:35 +0000   Sat, 13 Dec 2025 13:15:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:26:35 +0000   Sat, 13 Dec 2025 13:15:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:26:35 +0000   Sat, 13 Dec 2025 13:15:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-217219
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                e5e96994-f304-4728-9e5e-3e08ef7d5355
	  Boot ID:                    90a4a0ca-634d-4c7c-8727-6b2f644cc467
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-vgt4d                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-hn58w           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-6bcdcbc558-shvdj                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m31s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	  kube-system                 coredns-66bc5c9577-tqrcj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-217219                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-nm7k8                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-217219              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-217219     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-tglrm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-217219              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-l9dp7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-nmg94         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-217219 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-217219 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-217219 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-217219 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-217219 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-217219 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-217219 event: Registered Node functional-217219 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-217219 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-217219 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-217219 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-217219 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-217219 event: Registered Node functional-217219 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 3d 25 07 3f b0 08 06
	[ +15.550392] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 5b b2 4e f6 0c 08 06
	[  +0.000437] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 3d 25 07 3f b0 08 06
	[Dec13 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 56 d0 e6 62 ca 08 06
	[  +0.000156] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 2b b1 e9 34 e9 08 06
	[  +9.601084] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 6b 2f 7c 08 35 08 06
	[  +6.680640] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 7a 15 04 2e f9 08 06
	[  +0.000316] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 26 9c 63 03 a8 a5 08 06
	[  +0.000500] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5e bf e9 59 0c fc 08 06
	[ +14.220693] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 6b 48 e9 3e 65 08 06
	[  +0.000354] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 96 6b 2f 7c 08 35 08 06
	[ +17.192216] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 ce b1 a0 1c 7b 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 56 d0 e6 62 ca 08 06
	
	
	==> etcd [c2fd836d9420cc3ef039cfc57643a4f2bcbaf0ccc829507a7bf07da20f24249d] <==
	{"level":"warn","ts":"2025-12-13T13:16:03.034446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.042010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.048580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.055830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.062650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.072426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.079943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.087661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.095422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.102338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.109260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.116274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.123456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.130959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.138420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.144921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.151380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.165206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.172073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.179742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:16:03.229630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50262","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T13:16:32.386613Z","caller":"traceutil/trace.go:172","msg":"trace[2074850217] transaction","detail":"{read_only:false; response_revision:661; number_of_response:1; }","duration":"101.66303ms","start":"2025-12-13T13:16:32.284924Z","end":"2025-12-13T13:16:32.386587Z","steps":["trace[2074850217] 'process raft request'  (duration: 101.514281ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:26:02.729683Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1227}
	{"level":"info","ts":"2025-12-13T13:26:02.752587Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1227,"took":"22.524482ms","hash":4116881956,"current-db-size-bytes":3690496,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1785856,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-12-13T13:26:02.752644Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4116881956,"revision":1227,"compact-revision":-1}
	
	
	==> etcd [fe53cd1d10ae7440c0ab4771c70cf05cdfe232b267bbef5ad5d6d4ba4380ea7d] <==
	{"level":"warn","ts":"2025-12-13T13:15:04.665435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:15:04.671935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:15:04.678586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:15:04.693071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:15:04.701524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:15:04.708250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:15:04.758717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43246","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T13:15:59.512732Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T13:15:59.512828Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-217219","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-13T13:15:59.512954Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T13:15:59.514568Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T13:15:59.514640Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T13:15:59.514698Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-12-13T13:15:59.514720Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T13:15:59.514769Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-13T13:15:59.514758Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-12-13T13:15:59.514779Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T13:15:59.514747Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-13T13:15:59.514711Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T13:15:59.514861Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T13:15:59.514882Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T13:15:59.516712Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-13T13:15:59.516772Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T13:15:59.516813Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-13T13:15:59.516851Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-217219","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 13:26:36 up  2:08,  0 user,  load average: 0.08, 0.14, 0.54
	Linux functional-217219 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3786540370f774bebf4ad5bb115fd5bfc6e9e4a7c27d3b0315f9d7d75c1b8fbd] <==
	I1213 13:15:14.209800       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:15:14.210088       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1213 13:15:14.210254       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:15:14.210273       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:15:14.210306       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:15:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:15:14.410714       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:15:14.411117       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:15:14.411273       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:15:14.493128       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 13:15:14.793360       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:15:14.793646       1 metrics.go:72] Registering metrics
	I1213 13:15:14.793730       1 controller.go:711] "Syncing nftables rules"
	I1213 13:15:24.411537       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:15:24.411594       1 main.go:301] handling current node
	I1213 13:15:34.414449       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:15:34.414484       1 main.go:301] handling current node
	I1213 13:15:44.413624       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:15:44.413700       1 main.go:301] handling current node
	
	
	==> kindnet [7c27625128f5d2a8f9adf1abb326dc63f502c10156ee296ece00207990abaf9b] <==
	I1213 13:24:30.103653       1 main.go:301] handling current node
	I1213 13:24:40.100974       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:24:40.101007       1 main.go:301] handling current node
	I1213 13:24:50.106420       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:24:50.106455       1 main.go:301] handling current node
	I1213 13:25:00.102738       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:25:00.102773       1 main.go:301] handling current node
	I1213 13:25:10.102449       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:25:10.102487       1 main.go:301] handling current node
	I1213 13:25:20.104455       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:25:20.104490       1 main.go:301] handling current node
	I1213 13:25:30.103215       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:25:30.103250       1 main.go:301] handling current node
	I1213 13:25:40.099492       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:25:40.099541       1 main.go:301] handling current node
	I1213 13:25:50.106378       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:25:50.106413       1 main.go:301] handling current node
	I1213 13:26:00.103699       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:26:00.103737       1 main.go:301] handling current node
	I1213 13:26:10.101525       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:26:10.101566       1 main.go:301] handling current node
	I1213 13:26:20.100032       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:26:20.100087       1 main.go:301] handling current node
	I1213 13:26:30.103534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:26:30.103569       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e37e9c32a2f67d25d6e71a7a104fa14146231e6cb30b2fe965fd8c4b5c570c99] <==
	I1213 13:16:04.528567       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 13:16:04.590747       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1213 13:16:04.797596       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1213 13:16:04.798920       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 13:16:04.805252       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:16:05.328352       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 13:16:05.416055       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 13:16:05.466232       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:16:05.471371       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:16:14.181755       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 13:16:20.231872       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.217.175"}
	I1213 13:16:24.763172       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.101.47.42"}
	I1213 13:16:28.522818       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.30.133"}
	I1213 13:16:34.397382       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.208.35"}
	E1213 13:16:46.729704       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:48828: use of closed network connection
	E1213 13:16:47.100770       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:48844: use of closed network connection
	E1213 13:16:47.442037       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:48870: use of closed network connection
	E1213 13:16:49.303880       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:48902: use of closed network connection
	E1213 13:16:51.978129       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:32828: use of closed network connection
	E1213 13:16:54.972638       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:32862: use of closed network connection
	I1213 13:16:56.839910       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 13:16:56.952284       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.110.43"}
	I1213 13:16:56.978535       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.128.248"}
	I1213 13:17:05.766597       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.248.124"}
	I1213 13:26:03.622441       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [141b66734546c193cf86e7ea5259b3bb9b502e841b5861fa4c842c1a0ca3d361] <==
	I1213 13:15:50.399709       1 serving.go:386] Generated self-signed cert in-memory
	I1213 13:15:51.563626       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1213 13:15:51.563651       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:15:51.564981       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1213 13:15:51.564981       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1213 13:15:51.565311       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1213 13:15:51.565372       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1213 13:16:01.567279       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [3292b99a911f698d25ea44543a5320a5583f88039b6d95e1136fa85f0f2d083a] <==
	I1213 13:16:07.109385       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 13:16:07.109393       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1213 13:16:07.109509       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 13:16:07.109530       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 13:16:07.109530       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 13:16:07.109547       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 13:16:07.109679       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-217219"
	I1213 13:16:07.109755       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 13:16:07.109694       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 13:16:07.110835       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 13:16:07.112032       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 13:16:07.114909       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 13:16:07.114926       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 13:16:07.114932       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 13:16:07.116510       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:16:07.116534       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 13:16:07.117268       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 13:16:07.132495       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 13:16:07.136739       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1213 13:16:56.883638       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 13:16:56.887893       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 13:16:56.892107       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 13:16:56.892376       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 13:16:56.895059       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 13:16:56.900680       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [53c527df4ac1e73d8711bcf3f1c29a9683273f4bdb7e54383059bdcc69655e0c] <==
	I1213 13:15:49.866097       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1213 13:15:49.867246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-217219&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:15:51.355644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-217219&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:15:54.504131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-217219&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:15:58.444261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-217219&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1213 13:16:07.866251       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 13:16:07.866308       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1213 13:16:07.866465       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:16:07.901802       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:16:07.901867       1 server_linux.go:132] "Using iptables Proxier"
	I1213 13:16:07.908460       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:16:07.908868       1 server.go:527] "Version info" version="v1.34.2"
	I1213 13:16:07.908943       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:16:07.910297       1 config.go:200] "Starting service config controller"
	I1213 13:16:07.910343       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:16:07.910389       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:16:07.910412       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:16:07.910442       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:16:07.910448       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:16:07.910450       1 config.go:309] "Starting node config controller"
	I1213 13:16:07.910463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:16:07.910470       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:16:08.011167       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 13:16:08.011177       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 13:16:08.011224       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [acf55abb51f47f355bd2b622402a34abf7413a3b947d4525e847dc15063de2a1] <==
	I1213 13:15:13.528052       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:15:13.602892       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 13:15:13.703700       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 13:15:13.703754       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1213 13:15:13.703883       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:15:13.728759       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:15:13.728828       1 server_linux.go:132] "Using iptables Proxier"
	I1213 13:15:13.734856       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:15:13.735373       1 server.go:527] "Version info" version="v1.34.2"
	I1213 13:15:13.735750       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:15:13.737715       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:15:13.737732       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:15:13.737758       1 config.go:200] "Starting service config controller"
	I1213 13:15:13.737763       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:15:13.737790       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:15:13.737795       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:15:13.738102       1 config.go:309] "Starting node config controller"
	I1213 13:15:13.738117       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:15:13.838210       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:15:13.838346       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 13:15:13.838353       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 13:15:13.838374       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8bcb10561ea3e90be79cae89691f165952a81eeda0ad0bca8ed1f950621aa6b3] <==
	E1213 13:15:05.149613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 13:15:05.149666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 13:15:05.149679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 13:15:05.149731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 13:15:05.149757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 13:15:06.002511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:15:06.029916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 13:15:06.093688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 13:15:06.115346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 13:15:06.127574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 13:15:06.138703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 13:15:06.149550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 13:15:06.157694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 13:15:06.166784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 13:15:06.210911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 13:15:06.295510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 13:15:06.315611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 13:15:06.357092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 13:15:06.364239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1213 13:15:09.346491       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:15:49.298831       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:15:49.298995       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 13:15:49.299026       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 13:15:49.299077       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 13:15:49.299107       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [901e1f8b3cfddaf2b2ab53c55acfe523315eeb4548c58d83a264ed9621304c3f] <==
	E1213 13:15:55.244840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 13:15:55.266600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 13:15:55.362652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 13:15:55.599950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 13:15:55.704056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 13:15:57.835008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 13:15:57.950960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 13:15:58.359263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 13:15:58.620909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 13:15:58.729237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 13:15:58.758827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 13:15:58.873695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 13:15:59.015629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:15:59.347791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 13:15:59.389718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 13:15:59.759879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 13:15:59.761206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 13:15:59.787798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:15:59.964036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 13:16:00.175041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 13:16:00.241756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 13:16:00.247280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 13:16:00.873708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 13:16:01.447060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1213 13:16:06.058584       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 13:25:13 functional-217219 kubelet[4824]: E1213 13:25:13.475288    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-l9dp7" podUID="e1da2c8e-860b-46b9-bf72-15730e44b547"
	Dec 13 13:25:24 functional-217219 kubelet[4824]: E1213 13:25:24.475291    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-vgt4d" podUID="e9548e0a-4c34-4074-b36e-ff28177b494e"
	Dec 13 13:25:24 functional-217219 kubelet[4824]: E1213 13:25:24.475291    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hn58w" podUID="ad01409e-8548-4297-8640-76b5030e77d5"
	Dec 13 13:25:24 functional-217219 kubelet[4824]: E1213 13:25:24.475804    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nmg94" podUID="7922ddd4-c728-47aa-8eb9-2aeb85704036"
	Dec 13 13:25:27 functional-217219 kubelet[4824]: E1213 13:25:27.475985    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-l9dp7" podUID="e1da2c8e-860b-46b9-bf72-15730e44b547"
	Dec 13 13:25:35 functional-217219 kubelet[4824]: E1213 13:25:35.475489    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nmg94" podUID="7922ddd4-c728-47aa-8eb9-2aeb85704036"
	Dec 13 13:25:36 functional-217219 kubelet[4824]: E1213 13:25:36.475184    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hn58w" podUID="ad01409e-8548-4297-8640-76b5030e77d5"
	Dec 13 13:25:36 functional-217219 kubelet[4824]: E1213 13:25:36.475235    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-vgt4d" podUID="e9548e0a-4c34-4074-b36e-ff28177b494e"
	Dec 13 13:25:41 functional-217219 kubelet[4824]: E1213 13:25:41.476342    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-l9dp7" podUID="e1da2c8e-860b-46b9-bf72-15730e44b547"
	Dec 13 13:25:48 functional-217219 kubelet[4824]: E1213 13:25:48.475132    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hn58w" podUID="ad01409e-8548-4297-8640-76b5030e77d5"
	Dec 13 13:25:49 functional-217219 kubelet[4824]: E1213 13:25:49.475424    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-vgt4d" podUID="e9548e0a-4c34-4074-b36e-ff28177b494e"
	Dec 13 13:25:50 functional-217219 kubelet[4824]: E1213 13:25:50.475815    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nmg94" podUID="7922ddd4-c728-47aa-8eb9-2aeb85704036"
	Dec 13 13:25:55 functional-217219 kubelet[4824]: E1213 13:25:55.475785    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-l9dp7" podUID="e1da2c8e-860b-46b9-bf72-15730e44b547"
	Dec 13 13:26:01 functional-217219 kubelet[4824]: E1213 13:26:01.475774    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hn58w" podUID="ad01409e-8548-4297-8640-76b5030e77d5"
	Dec 13 13:26:02 functional-217219 kubelet[4824]: E1213 13:26:02.474497    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-vgt4d" podUID="e9548e0a-4c34-4074-b36e-ff28177b494e"
	Dec 13 13:26:02 functional-217219 kubelet[4824]: E1213 13:26:02.475116    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nmg94" podUID="7922ddd4-c728-47aa-8eb9-2aeb85704036"
	Dec 13 13:26:08 functional-217219 kubelet[4824]: E1213 13:26:08.476106    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-l9dp7" podUID="e1da2c8e-860b-46b9-bf72-15730e44b547"
	Dec 13 13:26:13 functional-217219 kubelet[4824]: E1213 13:26:13.475211    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hn58w" podUID="ad01409e-8548-4297-8640-76b5030e77d5"
	Dec 13 13:26:15 functional-217219 kubelet[4824]: E1213 13:26:15.474746    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-vgt4d" podUID="e9548e0a-4c34-4074-b36e-ff28177b494e"
	Dec 13 13:26:16 functional-217219 kubelet[4824]: E1213 13:26:16.475191    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nmg94" podUID="7922ddd4-c728-47aa-8eb9-2aeb85704036"
	Dec 13 13:26:21 functional-217219 kubelet[4824]: E1213 13:26:21.476126    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-l9dp7" podUID="e1da2c8e-860b-46b9-bf72-15730e44b547"
	Dec 13 13:26:26 functional-217219 kubelet[4824]: E1213 13:26:26.474613    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-vgt4d" podUID="e9548e0a-4c34-4074-b36e-ff28177b494e"
	Dec 13 13:26:28 functional-217219 kubelet[4824]: E1213 13:26:28.474942    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hn58w" podUID="ad01409e-8548-4297-8640-76b5030e77d5"
	Dec 13 13:26:31 functional-217219 kubelet[4824]: E1213 13:26:31.476082    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nmg94" podUID="7922ddd4-c728-47aa-8eb9-2aeb85704036"
	Dec 13 13:26:34 functional-217219 kubelet[4824]: E1213 13:26:34.475981    4824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-l9dp7" podUID="e1da2c8e-860b-46b9-bf72-15730e44b547"
	
	
	==> storage-provisioner [03e94fe565e59ebabcabe4ea0f31fc2402044879d36cbe08ce5bda3c8e456271] <==
	W1213 13:26:10.634502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:12.637613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:12.642494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:14.645837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:14.650086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:16.653097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:16.656945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:18.660359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:18.664168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:20.667248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:20.672229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:22.675269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:22.679312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:24.682036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:24.686930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:26.690568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:26.695052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:28.698388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:28.702144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:30.704820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:30.709593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:32.712683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:32.716642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:34.720885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:26:34.725179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [66ea250632d474e2b73e9cababb75cec75f9b7e974c0e91b118e92f14eb7e2d2] <==
	I1213 13:15:49.674529       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 13:15:49.677556       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-217219 -n functional-217219
helpers_test.go:270: (dbg) Run:  kubectl --context functional-217219 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-75c85bcc94-vgt4d hello-node-connect-7d85dfc575-hn58w dashboard-metrics-scraper-77bf4d6c4c-l9dp7 kubernetes-dashboard-855c9754f9-nmg94
helpers_test.go:283: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-217219 describe pod busybox-mount hello-node-75c85bcc94-vgt4d hello-node-connect-7d85dfc575-hn58w dashboard-metrics-scraper-77bf4d6c4c-l9dp7 kubernetes-dashboard-855c9754f9-nmg94
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-217219 describe pod busybox-mount hello-node-75c85bcc94-vgt4d hello-node-connect-7d85dfc575-hn58w dashboard-metrics-scraper-77bf4d6c4c-l9dp7 kubernetes-dashboard-855c9754f9-nmg94: exit status 1 (78.827173ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-217219/192.168.49.2
	Start Time:       Sat, 13 Dec 2025 13:16:56 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  containerd://f58996cb8c0621c27dde4338ef1a414880329db85a89c5b0d6732999df31c24c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 13 Dec 2025 13:16:58 +0000
	      Finished:     Sat, 13 Dec 2025 13:16:58 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xj5mj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-xj5mj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m41s  default-scheduler  Successfully assigned default/busybox-mount to functional-217219
	  Normal  Pulling    9m41s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m39s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.083s (2.083s including waiting). Image size: 2395207 bytes.
	  Normal  Created    9m39s  kubelet            Created container: mount-munger
	  Normal  Started    9m39s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-vgt4d
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-217219/192.168.49.2
	Start Time:       Sat, 13 Dec 2025 13:16:24 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vxx2p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vxx2p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-vgt4d to functional-217219
	  Warning  Failed     9m24s (x2 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  7m2s (x5 over 10m)  kubelet  Pulling image "kicbase/echo-server"
	  Warning  Failed   7m (x5 over 10m)    kubelet  Error: ErrImagePull
	  Warning  Failed   7m (x3 over 9m51s)  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   4m58s (x20 over 10m)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  11s (x41 over 10m)    kubelet  Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-hn58w
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-217219/192.168.49.2
	Start Time:       Sat, 13 Dec 2025 13:16:34 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bvmfg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bvmfg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hn58w to functional-217219
	  Warning  Failed     8m26s (x4 over 9m54s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  7m5s (x5 over 10m)    kubelet  Pulling image "kicbase/echo-server"
	  Warning  Failed   7m2s (x5 over 9m54s)  kubelet  Error: ErrImagePull
	  Warning  Failed   7m2s                  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   4m52s (x19 over 9m53s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  4m27s (x21 over 9m53s)  kubelet  Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-l9dp7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-nmg94" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-217219 describe pod busybox-mount hello-node-75c85bcc94-vgt4d hello-node-connect-7d85dfc575-hn58w dashboard-metrics-scraper-77bf4d6c4c-l9dp7 kubernetes-dashboard-855c9754f9-nmg94: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.90s)
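
The ServiceCmdConnect failure above is not a service-routing bug; the pod never starts because Docker Hub's unauthenticated pull rate limit (HTTP 429) blocks the kicbase/echo-server image. A minimal mitigation sketch, assuming the CI host can either pull the image locally or authenticate to Docker Hub (the profile name functional-217219 comes from this run; the secret name dockerhub-creds and the DOCKER_USER/DOCKER_PASS variables are illustrative):

# Option A: side-load the image so the kubelet never contacts registry-1.docker.io
docker pull kicbase/echo-server:latest
minikube -p functional-217219 image load kicbase/echo-server:latest

# Option B: make pulls authenticated so the higher per-account rate limit applies
kubectl --context functional-217219 create secret docker-registry dockerhub-creds \
  --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PASS"
kubectl --context functional-217219 patch serviceaccount default \
  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'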

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-217219 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-217219 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-vgt4d" [e9548e0a-4c34-4074-b36e-ff28177b494e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-217219 -n functional-217219
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-13 13:26:25.099898324 +0000 UTC m=+1306.049499627
functional_test.go:1460: (dbg) Run:  kubectl --context functional-217219 describe po hello-node-75c85bcc94-vgt4d -n default
functional_test.go:1460: (dbg) kubectl --context functional-217219 describe po hello-node-75c85bcc94-vgt4d -n default:
Name:             hello-node-75c85bcc94-vgt4d
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-217219/192.168.49.2
Start Time:       Sat, 13 Dec 2025 13:16:24 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vxx2p (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-vxx2p:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-75c85bcc94-vgt4d to functional-217219
Warning  Failed     9m12s (x2 over 9m58s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling  6m50s (x5 over 10m)    kubelet  Pulling image "kicbase/echo-server"
Warning  Failed   6m48s (x5 over 9m58s)  kubelet  Error: ErrImagePull
Warning  Failed   6m48s (x3 over 9m39s)  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed   4m46s (x20 over 9m57s)  kubelet  Error: ImagePullBackOff
Normal   BackOff  4m35s (x21 over 9m57s)  kubelet  Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-217219 logs hello-node-75c85bcc94-vgt4d -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-217219 logs hello-node-75c85bcc94-vgt4d -n default: exit status 1 (69.34904ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-vgt4d" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-217219 logs hello-node-75c85bcc94-vgt4d -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.62s)
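
The deployment itself was created; only the image pull keeps the pod Pending for the full 10m0s. A small diagnostic sketch against the same context and label, confirming the waiting reason and surfacing the registry 429 from the event stream:

# Expected output: hello-node-...  ImagePullBackOff (or ErrImagePull)
kubectl --context functional-217219 get pods -l app=hello-node -n default \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}'

# The Failed events carry the underlying 429 Too Many Requests from registry-1.docker.io
kubectl --context functional-217219 get events -n default --field-selector reason=Failed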

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217219 service --namespace=default --https --url hello-node: exit status 115 (548.846679ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30691
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-217219 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)
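
minikube exits with SVC_UNREACHABLE here because the hello-node service has no ready endpoints, even though a NodePort (30691 in the stdout above) was allocated. A quick way to confirm that split, assuming the same profile and the default namespace:

# ENDPOINTS stays empty until an echo-server pod becomes Ready
kubectl --context functional-217219 get endpoints hello-node -n default

# The NodePort exists regardless of pod readiness
kubectl --context functional-217219 get svc hello-node -n default -o jsonpath='{.spec.ports[0].nodePort}'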

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217219 service hello-node --url --format={{.IP}}: exit status 115 (545.176595ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-217219 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.55s)
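
The --format flag appears to be a template rendered over the resolved service URL, which is why {{.IP}} still printed 192.168.49.2 before the command bailed out for the same no-running-pod reason. Related invocations for the same profile (a sketch, not part of the test run):

# Overview of every exposed service and its URL for this profile
minikube -p functional-217219 service list

# Same template query as the test; meaningful once a hello-node pod is Running
minikube -p functional-217219 service hello-node --url --format={{.IP}}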

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217219 service hello-node --url: exit status 115 (546.12416ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30691
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-217219 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30691
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.55s)
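
The printed URL http://192.168.49.2:30691 matches the NodePort seen in the HTTPS subtest, so once the image-pull problem is resolved the endpoint can be exercised directly. An illustrative check (the response is assumed from echo-server's usual behaviour, not observed in this run):

# echo-server normally reflects the request back in the response body
curl -i http://192.168.49.2:30691/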

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (4.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-017456 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-017456 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-017456 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-017456 --alsologtostderr -v=1] stderr:
I1213 13:28:39.231164  480602 out.go:360] Setting OutFile to fd 1 ...
I1213 13:28:39.231442  480602 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:28:39.231453  480602 out.go:374] Setting ErrFile to fd 2...
I1213 13:28:39.231459  480602 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:28:39.231766  480602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
I1213 13:28:39.232083  480602 mustload.go:66] Loading cluster: functional-017456
I1213 13:28:39.232613  480602 config.go:182] Loaded profile config "functional-017456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 13:28:39.233212  480602 cli_runner.go:164] Run: docker container inspect functional-017456 --format={{.State.Status}}
I1213 13:28:39.257896  480602 host.go:66] Checking if "functional-017456" exists ...
I1213 13:28:39.258247  480602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 13:28:39.327110  480602 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-13 13:28:39.315199353 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1213 13:28:39.327283  480602 api_server.go:166] Checking apiserver status ...
I1213 13:28:39.329444  480602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1213 13:28:39.329560  480602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-017456
I1213 13:28:39.352213  480602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33172 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/functional-017456/id_rsa Username:docker}
I1213 13:28:39.463623  480602 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4995/cgroup
W1213 13:28:39.473457  480602 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4995/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1213 13:28:39.473512  480602 ssh_runner.go:195] Run: ls
I1213 13:28:39.477609  480602 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1213 13:28:39.482835  480602 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1213 13:28:39.482896  480602 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1213 13:28:39.483053  480602 config.go:182] Loaded profile config "functional-017456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 13:28:39.483072  480602 addons.go:70] Setting dashboard=true in profile "functional-017456"
I1213 13:28:39.483082  480602 addons.go:239] Setting addon dashboard=true in "functional-017456"
I1213 13:28:39.483115  480602 host.go:66] Checking if "functional-017456" exists ...
I1213 13:28:39.483541  480602 cli_runner.go:164] Run: docker container inspect functional-017456 --format={{.State.Status}}
I1213 13:28:39.509764  480602 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1213 13:28:39.511452  480602 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1213 13:28:39.512799  480602 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1213 13:28:39.512825  480602 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1213 13:28:39.512923  480602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-017456
I1213 13:28:39.533635  480602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33172 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/functional-017456/id_rsa Username:docker}
I1213 13:28:39.643492  480602 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1213 13:28:39.643520  480602 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1213 13:28:39.658418  480602 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1213 13:28:39.658444  480602 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1213 13:28:39.672630  480602 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1213 13:28:39.672658  480602 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1213 13:28:39.687901  480602 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1213 13:28:39.687936  480602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1213 13:28:39.702632  480602 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1213 13:28:39.702654  480602 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1213 13:28:39.716854  480602 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1213 13:28:39.716887  480602 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1213 13:28:39.731367  480602 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1213 13:28:39.731393  480602 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1213 13:28:39.747417  480602 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1213 13:28:39.747444  480602 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1213 13:28:39.762541  480602 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1213 13:28:39.762585  480602 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1213 13:28:39.777291  480602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1213 13:28:40.341061  480602 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-017456 addons enable metrics-server

                                                
                                                
I1213 13:28:40.342294  480602 addons.go:202] Writing out "functional-017456" config to set dashboard=true...
W1213 13:28:40.342661  480602 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1213 13:28:40.343545  480602 kapi.go:59] client config for functional-017456: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.key", CAFile:"/home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1213 13:28:40.344211  480602 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1213 13:28:40.344232  480602 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1213 13:28:40.344239  480602 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1213 13:28:40.344245  480602 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1213 13:28:40.344251  480602 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1213 13:28:40.354838  480602 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  933c5c4b-bec2-4818-a010-c4337c30cb70 827 0 2025-12-13 13:28:40 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-13 13:28:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.109.227.101,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.109.227.101],IPFamilies:[IPv4],AllocateLoadBalan
cerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1213 13:28:40.355013  480602 out.go:285] * Launching proxy ...
* Launching proxy ...
I1213 13:28:40.355092  480602 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-017456 proxy --port 36195]
I1213 13:28:40.355413  480602 dashboard.go:159] Waiting for kubectl to output host:port ...
I1213 13:28:40.413793  480602 out.go:203] 
W1213 13:28:40.415115  480602 out.go:285] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W1213 13:28:40.415138  480602 out.go:285] * 
* 
W1213 13:28:40.420025  480602 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_mount_56fc10ae89227dc10b3e9c0b0bbeff86322bc94d_0.log                   │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_mount_56fc10ae89227dc10b3e9c0b0bbeff86322bc94d_0.log                   │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1213 13:28:40.421469  480602 out.go:203] 
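For context on the HOST_KUBECTL_PROXY failure above: the dashboard command launches `kubectl proxy` and waits for it to print its host:port line, and the `readByteWithTimeout: EOF` means the proxy process exited (or closed stdout) before printing anything. Below is a minimal standalone sketch of that wait, not minikube's dashboard.go; it assumes kubectl is on PATH, and it reuses the context name and port 36195 from the dashboard.go:154 log line above purely for illustration.

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Same invocation as shown at dashboard.go:154 in the log above.
	cmd := exec.Command("kubectl", "--context", "functional-017456", "proxy", "--port", "36195")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	lines := make(chan string, 1)
	errs := make(chan error, 1)
	go func() {
		// kubectl proxy normally prints "Starting to serve on 127.0.0.1:36195".
		line, err := bufio.NewReader(stdout).ReadString('\n')
		if err != nil {
			errs <- err // an immediate EOF here corresponds to the failure above
			return
		}
		lines <- line
	}()

	select {
	case line := <-lines:
		fmt.Printf("proxy is up: %s", line)
	case err := <-errs:
		fmt.Printf("proxy exited before printing host:port: %v\n", err)
	case <-time.After(30 * time.Second):
		fmt.Println("timed out waiting for kubectl proxy output")
	}
}

Run against the same profile, this should either print the proxy's startup line or reproduce the early-exit EOF seen in this test.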
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-017456
helpers_test.go:244: (dbg) docker inspect functional-017456:

-- stdout --
	[
	    {
	        "Id": "dc0505e6f14d3828bdd7a5184a0701dce7002f6c86462c569d1ec334fb778c2a",
	        "Created": "2025-12-13T13:26:44.780881886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 465647,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:26:44.814953421Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/dc0505e6f14d3828bdd7a5184a0701dce7002f6c86462c569d1ec334fb778c2a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc0505e6f14d3828bdd7a5184a0701dce7002f6c86462c569d1ec334fb778c2a/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc0505e6f14d3828bdd7a5184a0701dce7002f6c86462c569d1ec334fb778c2a/hosts",
	        "LogPath": "/var/lib/docker/containers/dc0505e6f14d3828bdd7a5184a0701dce7002f6c86462c569d1ec334fb778c2a/dc0505e6f14d3828bdd7a5184a0701dce7002f6c86462c569d1ec334fb778c2a-json.log",
	        "Name": "/functional-017456",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-017456:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-017456",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc0505e6f14d3828bdd7a5184a0701dce7002f6c86462c569d1ec334fb778c2a",
	                "LowerDir": "/var/lib/docker/overlay2/ffe8e107b739ac86af9908c632a576b8cf1b31dba0c456a2e2716812003893c7-init/diff:/var/lib/docker/overlay2/be5aa5e3490e76c6aea57ece480ce7168b4c08e9f5040b5571a6aeb87c809618/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ffe8e107b739ac86af9908c632a576b8cf1b31dba0c456a2e2716812003893c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ffe8e107b739ac86af9908c632a576b8cf1b31dba0c456a2e2716812003893c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ffe8e107b739ac86af9908c632a576b8cf1b31dba0c456a2e2716812003893c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-017456",
	                "Source": "/var/lib/docker/volumes/functional-017456/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-017456",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-017456",
	                "name.minikube.sigs.k8s.io": "functional-017456",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "65dc6d89a682d9b25528e0cb670b40d0a7c1e7f12696b874c681a2c484f74b98",
	            "SandboxKey": "/var/run/docker/netns/65dc6d89a682",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-017456": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7959b8bc36341f70b2e9d5c9c373b6f058ef040634cf761e237991533386152e",
	                    "EndpointID": "8389afe1fd584109db48b826a8a6004b2869bf36c61ef5bc01c6d9512d5c0231",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "86:80:a2:29:cb:72",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-017456",
	                        "dc0505e6f14d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
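Rather than scanning the full `docker inspect` dump above, the forwarded apiserver port (8441/tcp, mapped to 127.0.0.1:33175 in this run) can be pulled out programmatically. A minimal Go sketch, assuming docker is on PATH and the functional-017456 container from the dump still exists:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields needed from the inspect JSON shown above.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "functional-017456").Output()
	if err != nil {
		panic(err)
	}
	var containers []container
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	if len(containers) == 0 {
		panic("no such container")
	}
	// Against the dump above this prints 127.0.0.1:33175 for the apiserver port.
	for _, b := range containers[0].NetworkSettings.Ports["8441/tcp"] {
		fmt.Printf("%s:%s\n", b.HostIP, b.HostPort)
	}
}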
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-017456 -n functional-017456
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-017456 logs -n 25: (1.755047906s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                                 ARGS                                                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-017456 ssh sudo cat /usr/share/ca-certificates/4055312.pem                                                                                                 │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │ 13 Dec 25 13:28 UTC │
	│ ssh            │ functional-017456 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                              │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │ 13 Dec 25 13:28 UTC │
	│ image          │ functional-017456 image ls                                                                                                                                            │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │ 13 Dec 25 13:28 UTC │
	│ mount          │ -p functional-017456 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1381143754/001:/mount-9p --alsologtostderr -v=1                                │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │                     │
	│ ssh            │ functional-017456 ssh findmnt -T /mount-9p | grep 9p                                                                                                                  │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │                     │
	│ image          │ functional-017456 image load --daemon kicbase/echo-server:functional-017456 --alsologtostderr                                                                         │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │ 13 Dec 25 13:28 UTC │
	│ ssh            │ functional-017456 ssh findmnt -T /mount-9p | grep 9p                                                                                                                  │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │ 13 Dec 25 13:28 UTC │
	│ image          │ functional-017456 image ls                                                                                                                                            │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │ 13 Dec 25 13:28 UTC │
	│ ssh            │ functional-017456 ssh -- ls -la /mount-9p                                                                                                                             │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │ 13 Dec 25 13:28 UTC │
	│ image          │ functional-017456 image save kicbase/echo-server:functional-017456 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │ 13 Dec 25 13:28 UTC │
	│ ssh            │ functional-017456 ssh cat /mount-9p/test-1765632515344500749                                                                                                          │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │ 13 Dec 25 13:28 UTC │
	│ image          │ functional-017456 image rm kicbase/echo-server:functional-017456 --alsologtostderr                                                                                    │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │ 13 Dec 25 13:28 UTC │
	│ image          │ functional-017456 image ls                                                                                                                                            │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │ 13 Dec 25 13:28 UTC │
	│ image          │ functional-017456 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr                                       │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │ 13 Dec 25 13:28 UTC │
	│ image          │ functional-017456 image ls                                                                                                                                            │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │ 13 Dec 25 13:28 UTC │
	│ image          │ functional-017456 image save --daemon kicbase/echo-server:functional-017456 --alsologtostderr                                                                         │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │ 13 Dec 25 13:28 UTC │
	│ start          │ -p functional-017456 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                   │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │                     │
	│ start          │ -p functional-017456 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                   │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │                     │
	│ start          │ -p functional-017456 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                             │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-017456 --alsologtostderr -v=1                                                                                                        │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │                     │
	│ update-context │ functional-017456 update-context --alsologtostderr -v=2                                                                                                               │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │ 13 Dec 25 13:28 UTC │
	│ update-context │ functional-017456 update-context --alsologtostderr -v=2                                                                                                               │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │ 13 Dec 25 13:28 UTC │
	│ update-context │ functional-017456 update-context --alsologtostderr -v=2                                                                                                               │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │ 13 Dec 25 13:28 UTC │
	│ image          │ functional-017456 image ls --format short --alsologtostderr                                                                                                           │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │ 13 Dec 25 13:28 UTC │
	│ image          │ functional-017456 image ls --format json --alsologtostderr                                                                                                            │ functional-017456 │ jenkins │ v1.37.0 │ 13 Dec 25 13:28 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:28:39
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:28:39.165995  480574 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:28:39.166274  480574 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:28:39.166287  480574 out.go:374] Setting ErrFile to fd 2...
	I1213 13:28:39.166294  480574 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:28:39.166584  480574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:28:39.167143  480574 out.go:368] Setting JSON to false
	I1213 13:28:39.168440  480574 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7862,"bootTime":1765624657,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:28:39.168514  480574 start.go:143] virtualization: kvm guest
	I1213 13:28:39.170850  480574 out.go:179] * [functional-017456] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:28:39.172367  480574 notify.go:221] Checking for updates...
	I1213 13:28:39.172426  480574 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:28:39.173895  480574 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:28:39.175286  480574 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	I1213 13:28:39.176720  480574 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	I1213 13:28:39.178169  480574 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:28:39.179618  480574 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:28:39.181502  480574 config.go:182] Loaded profile config "functional-017456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 13:28:39.182382  480574 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:28:39.212389  480574 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:28:39.212491  480574 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:28:39.283959  480574 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-13 13:28:39.26903125 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:28:39.284128  480574 docker.go:319] overlay module found
	I1213 13:28:39.287431  480574 out.go:179] * Using the docker driver based on existing profile
	I1213 13:28:39.288743  480574 start.go:309] selected driver: docker
	I1213 13:28:39.288763  480574 start.go:927] validating driver "docker" against &{Name:functional-017456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-017456 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:28:39.288894  480574 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:28:39.289013  480574 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:28:39.359811  480574 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-12-13 13:28:39.349135865 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:28:39.360750  480574 cni.go:84] Creating CNI manager for ""
	I1213 13:28:39.360838  480574 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 13:28:39.360887  480574 start.go:353] cluster config:
	{Name:functional-017456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-017456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:28:39.363269  480574 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d3c6eb306c519       a236f84b9d5d2       8 seconds ago        Running             myfrontend                0                   e9ccf16a8926e       sp-pod                                      default
	23616144d41a9       9056ab77afb8e       16 seconds ago       Running             echo-server               0                   7025d2d483caa       hello-node-connect-9f67c86d4-4v4lk          default
	25d465ef4220f       a236f84b9d5d2       17 seconds ago       Running             nginx                     0                   d6d8db654fc64       nginx-svc                                   default
	9bb8d806375b7       9056ab77afb8e       19 seconds ago       Running             echo-server               0                   a288ed430bd8d       hello-node-5758569b79-7snlj                 default
	d1376ec8c0df7       aa9d02839d8de       46 seconds ago       Running             kube-apiserver            0                   119cbf4f733f4       kube-apiserver-functional-017456            kube-system
	2a2e108162281       45f3cc72d235f       46 seconds ago       Running             kube-controller-manager   2                   7f435d31f7cf4       kube-controller-manager-functional-017456   kube-system
	e2c2ff3a54cc1       a3e246e9556e9       57 seconds ago       Running             etcd                      1                   75657c5961093       etcd-functional-017456                      kube-system
	8150f16aab6e9       aa5e3ebc0dfed       About a minute ago   Running             coredns                   1                   6482b29b5756c       coredns-7d764666f9-sh4tp                    kube-system
	a6dc48ef4c88a       409467f978b4a       About a minute ago   Running             kindnet-cni               1                   c816a300fccd4       kindnet-wh999                               kube-system
	d9493adc1209e       8a4ded35a3eb1       About a minute ago   Running             kube-proxy                1                   36929c3ae5143       kube-proxy-7dkdt                            kube-system
	785fcee4eba63       45f3cc72d235f       About a minute ago   Exited              kube-controller-manager   1                   7f435d31f7cf4       kube-controller-manager-functional-017456   kube-system
	91719c159f122       7bb6219ddab95       About a minute ago   Running             kube-scheduler            1                   62ea974355594       kube-scheduler-functional-017456            kube-system
	3823191bbd1fb       6e38f40d628db       About a minute ago   Running             storage-provisioner       1                   cbb5893cf4c78       storage-provisioner                         kube-system
	06f868f5670fd       aa5e3ebc0dfed       About a minute ago   Exited              coredns                   0                   6482b29b5756c       coredns-7d764666f9-sh4tp                    kube-system
	c02e216c9a8c7       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   cbb5893cf4c78       storage-provisioner                         kube-system
	bb450a12830a9       409467f978b4a       About a minute ago   Exited              kindnet-cni               0                   c816a300fccd4       kindnet-wh999                               kube-system
	3731eb9919a6e       8a4ded35a3eb1       About a minute ago   Exited              kube-proxy                0                   36929c3ae5143       kube-proxy-7dkdt                            kube-system
	6e54c42a3d486       7bb6219ddab95       About a minute ago   Exited              kube-scheduler            0                   62ea974355594       kube-scheduler-functional-017456            kube-system
	472b496cf2afd       a3e246e9556e9       About a minute ago   Exited              etcd                      0                   75657c5961093       etcd-functional-017456                      kube-system
	
	
	==> containerd <==
	Dec 13 13:28:36 functional-017456 containerd[3766]: time="2025-12-13T13:28:36.117358605Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-017456\""
	Dec 13 13:28:36 functional-017456 containerd[3766]: time="2025-12-13T13:28:36.118979606Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-017456\""
	Dec 13 13:28:36 functional-017456 containerd[3766]: time="2025-12-13T13:28:36.119871963Z" level=info msg="ImageDelete event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\""
	Dec 13 13:28:36 functional-017456 containerd[3766]: time="2025-12-13T13:28:36.125444548Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-017456\" returns successfully"
	Dec 13 13:28:36 functional-017456 containerd[3766]: time="2025-12-13T13:28:36.605913118Z" level=info msg="No images store for sha256:f615d69a51308e7e40554d7713aeea7d0be874da54782bf1526d7b60d8f54b54"
	Dec 13 13:28:36 functional-017456 containerd[3766]: time="2025-12-13T13:28:36.607091503Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-017456\""
	Dec 13 13:28:36 functional-017456 containerd[3766]: time="2025-12-13T13:28:36.610781198Z" level=info msg="ImageCreate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 13 13:28:36 functional-017456 containerd[3766]: time="2025-12-13T13:28:36.611211929Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-017456\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 13 13:28:37 functional-017456 containerd[3766]: time="2025-12-13T13:28:37.513713103Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-017456\""
	Dec 13 13:28:37 functional-017456 containerd[3766]: time="2025-12-13T13:28:37.515747198Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-017456\""
	Dec 13 13:28:37 functional-017456 containerd[3766]: time="2025-12-13T13:28:37.516750967Z" level=info msg="ImageDelete event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\""
	Dec 13 13:28:37 functional-017456 containerd[3766]: time="2025-12-13T13:28:37.521146347Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-017456\" returns successfully"
	Dec 13 13:28:37 functional-017456 containerd[3766]: time="2025-12-13T13:28:37.773189535Z" level=info msg="RunPodSandbox for name:\"busybox-mount\"  uid:\"4c7aad80-3395-4f9b-a96a-c629d22bdf94\"  namespace:\"default\""
	Dec 13 13:28:37 functional-017456 containerd[3766]: time="2025-12-13T13:28:37.810213200Z" level=info msg="connecting to shim f56457b547ce5399d05217c14ca12172957d3529d4193a4ac377a308c2759d09" address="unix:///run/containerd/s/d6e5f93492d9bc4b64bcab08eceb5b2e594f37d78a1312e3bd676cf313e1e0f7" namespace=k8s.io protocol=ttrpc version=3
	Dec 13 13:28:37 functional-017456 containerd[3766]: time="2025-12-13T13:28:37.889761518Z" level=info msg="RunPodSandbox for name:\"busybox-mount\"  uid:\"4c7aad80-3395-4f9b-a96a-c629d22bdf94\"  namespace:\"default\" returns sandbox id \"f56457b547ce5399d05217c14ca12172957d3529d4193a4ac377a308c2759d09\""
	Dec 13 13:28:38 functional-017456 containerd[3766]: time="2025-12-13T13:28:38.200268221Z" level=info msg="No images store for sha256:426b081b6b09b03e0f9cbb7645a43cd06622535271fff9ba78345e18243fdfd9"
	Dec 13 13:28:38 functional-017456 containerd[3766]: time="2025-12-13T13:28:38.201482671Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-017456\""
	Dec 13 13:28:38 functional-017456 containerd[3766]: time="2025-12-13T13:28:38.206661116Z" level=info msg="ImageCreate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 13 13:28:38 functional-017456 containerd[3766]: time="2025-12-13T13:28:38.207082414Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-017456\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 13 13:28:40 functional-017456 containerd[3766]: time="2025-12-13T13:28:40.600877354Z" level=info msg="RunPodSandbox for name:\"kubernetes-dashboard-b84665fb8-2d58w\"  uid:\"4ac2fe21-7a33-4f7b-bbd9-df9bb425382b\"  namespace:\"kubernetes-dashboard\""
	Dec 13 13:28:40 functional-017456 containerd[3766]: time="2025-12-13T13:28:40.611574849Z" level=info msg="RunPodSandbox for name:\"dashboard-metrics-scraper-5565989548-k88hv\"  uid:\"2066d111-93dd-4c83-8877-ab5dca71fb6c\"  namespace:\"kubernetes-dashboard\""
	Dec 13 13:28:40 functional-017456 containerd[3766]: time="2025-12-13T13:28:40.644516701Z" level=info msg="connecting to shim 1385b26c4b4cb0b4e25a2ce7fe99a51df19b254a35f15abbe3253dd6927075b1" address="unix:///run/containerd/s/ec9843f16f4bb2cb2f03a028d3c1d97d0a14a7111443fc0e4b18733188f24604" namespace=k8s.io protocol=ttrpc version=3
	Dec 13 13:28:40 functional-017456 containerd[3766]: time="2025-12-13T13:28:40.655362226Z" level=info msg="connecting to shim dd15946dd224ed742f4e43b0726d9af2ad45d711e02778ae3a38a30a98bb62a4" address="unix:///run/containerd/s/87776c697fa1a7c369963fdc1a8e458708c59e5d7a3ab26f3e79ad4d9e87118b" namespace=k8s.io protocol=ttrpc version=3
	Dec 13 13:28:40 functional-017456 containerd[3766]: time="2025-12-13T13:28:40.739155007Z" level=info msg="RunPodSandbox for name:\"kubernetes-dashboard-b84665fb8-2d58w\"  uid:\"4ac2fe21-7a33-4f7b-bbd9-df9bb425382b\"  namespace:\"kubernetes-dashboard\" returns sandbox id \"1385b26c4b4cb0b4e25a2ce7fe99a51df19b254a35f15abbe3253dd6927075b1\""
	Dec 13 13:28:40 functional-017456 containerd[3766]: time="2025-12-13T13:28:40.741458935Z" level=info msg="RunPodSandbox for name:\"dashboard-metrics-scraper-5565989548-k88hv\"  uid:\"2066d111-93dd-4c83-8877-ab5dca71fb6c\"  namespace:\"kubernetes-dashboard\" returns sandbox id \"dd15946dd224ed742f4e43b0726d9af2ad45d711e02778ae3a38a30a98bb62a4\""
	
	
	==> coredns [06f868f5670fdbaab39b3fa90e54fbcf89da1ecfc89d04d7d3bfec7ad236ade0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:56819 - 45246 "HINFO IN 1028809934549783774.6788485636559861508. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026332246s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8150f16aab6e9e8dc32fde27c15d3e957e24eb45c71021bf90036242ce5ef62c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:58878 - 11435 "HINFO IN 3311262360014318146.7828219062211494927. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020647058s
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               functional-017456
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-017456
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=functional-017456
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_26_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:26:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-017456
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:28:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:28:27 +0000   Sat, 13 Dec 2025 13:26:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:28:27 +0000   Sat, 13 Dec 2025 13:26:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:28:27 +0000   Sat, 13 Dec 2025 13:26:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:28:27 +0000   Sat, 13 Dec 2025 13:27:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-017456
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                3d35d82b-666f-4299-92ec-ed4d95dcc03d
	  Boot ID:                    90a4a0ca-634d-4c7c-8727-6b2f644cc467
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox-mount                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  default                     hello-node-5758569b79-7snlj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     hello-node-connect-9f67c86d4-4v4lk            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  default                     mysql-7d7b65bc95-5ppq4                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-sh4tp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 etcd-functional-017456                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-wh999                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-functional-017456              250m (3%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-controller-manager-functional-017456     200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-7dkdt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-functional-017456              100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-k88hv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-2d58w          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  103s  node-controller  Node functional-017456 event: Registered Node functional-017456 in Controller
	  Normal  RegisteredNode  43s   node-controller  Node functional-017456 event: Registered Node functional-017456 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 3d 25 07 3f b0 08 06
	[ +15.550392] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 5b b2 4e f6 0c 08 06
	[  +0.000437] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 3d 25 07 3f b0 08 06
	[Dec13 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 56 d0 e6 62 ca 08 06
	[  +0.000156] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 2b b1 e9 34 e9 08 06
	[  +9.601084] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 6b 2f 7c 08 35 08 06
	[  +6.680640] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 7a 15 04 2e f9 08 06
	[  +0.000316] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 26 9c 63 03 a8 a5 08 06
	[  +0.000500] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5e bf e9 59 0c fc 08 06
	[ +14.220693] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 6b 48 e9 3e 65 08 06
	[  +0.000354] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 96 6b 2f 7c 08 35 08 06
	[ +17.192216] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 ce b1 a0 1c 7b 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 56 d0 e6 62 ca 08 06
	
	
	==> etcd [472b496cf2afd8b009e95f52b8bcb2052991db4979f647db6de81587923bb747] <==
	{"level":"warn","ts":"2025-12-13T13:26:51.958800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:26:51.965021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:26:51.981511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:26:51.987921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:26:51.994985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:26:52.001467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:26:52.048854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39940","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T13:27:36.540230Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T13:27:36.540306Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-017456","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-13T13:27:36.540454Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T13:27:36.540589Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T13:27:43.541863Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-13T13:27:43.541906Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T13:27:43.541968Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T13:27:43.541908Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-13T13:27:43.541970Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"error","ts":"2025-12-13T13:27:43.541980Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-13T13:27:43.541987Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T13:27:43.541997Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T13:27:43.542014Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-13T13:27:43.542027Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-13T13:27:43.545501Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-13T13:27:43.545583Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T13:27:43.545612Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-13T13:27:43.545621Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-017456","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [e2c2ff3a54cc154356a037c420138ad3b51d5c36e779ca660a02278d163cb533] <==
	{"level":"warn","ts":"2025-12-13T13:27:55.611693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.620393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.626972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.634504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.640852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.647293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.653868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.660152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.667524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.676370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.682711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.688929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.696633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.703612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.709594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.716147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.722952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.728918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.735958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.742975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.761846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.767911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.774182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.781565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:27:55.828661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54326","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:28:42 up  2:11,  0 user,  load average: 1.24, 0.43, 0.59
	Linux functional-017456 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a6dc48ef4c88a9d02efa88ef39952b6eb541c68e4c37a2fe000aff16b8a628a7] <==
	podIP = 192.168.49.2
	I1213 13:27:37.127524       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:27:37.127540       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:27:37.127560       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:27:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:27:37.326657       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:27:37.326763       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:27:37.326775       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:27:37.397995       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 13:27:37.727237       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:27:37.727262       1 metrics.go:72] Registering metrics
	I1213 13:27:37.727347       1 controller.go:711] "Syncing nftables rules"
	I1213 13:27:47.327226       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:27:47.327311       1 main.go:301] handling current node
	E1213 13:27:56.221472       1 reflector.go:200] "Failed to watch" err="namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"kindnet\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1213 13:27:57.327586       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:27:57.327624       1 main.go:301] handling current node
	I1213 13:28:07.327581       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:28:07.327633       1 main.go:301] handling current node
	I1213 13:28:17.326852       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:28:17.326907       1 main.go:301] handling current node
	I1213 13:28:27.327238       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:28:27.327296       1 main.go:301] handling current node
	I1213 13:28:37.329436       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:28:37.329474       1 main.go:301] handling current node
	
	
	==> kindnet [bb450a12830a9e9ad55edc88920014a95641e2e1e5f82142e2027df91776511c] <==
	I1213 13:27:01.499528       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:27:01.499831       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1213 13:27:01.499989       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:27:01.500005       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:27:01.500029       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:27:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:27:01.798094       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:27:01.798240       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:27:01.798284       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:27:01.798650       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 13:27:02.198558       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:27:02.198589       1 metrics.go:72] Registering metrics
	I1213 13:27:02.198646       1 controller.go:711] "Syncing nftables rules"
	I1213 13:27:11.699226       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:27:11.699274       1 main.go:301] handling current node
	I1213 13:27:21.698923       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:27:21.698965       1 main.go:301] handling current node
	I1213 13:27:31.698262       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:27:31.698338       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d1376ec8c0df7eca7f60dc5833c61c554c4a0752f85b7effda16ef6182196b28] <==
	I1213 13:27:56.277828       1 cache.go:39] Caches are synced for autoregister controller
	I1213 13:27:56.277515       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 13:27:56.277848       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 13:27:56.282283       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 13:27:56.297605       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 13:27:56.499455       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 13:27:57.180994       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1213 13:27:57.385592       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1213 13:27:57.386734       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 13:27:57.390889       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:27:57.875353       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 13:27:57.965032       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 13:27:58.013357       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:27:58.018692       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:28:15.312388       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.103.247.96"}
	I1213 13:28:19.698352       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 13:28:19.807602       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.247.68"}
	I1213 13:28:20.883161       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.129.128"}
	I1213 13:28:21.744929       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.74.199"}
	E1213 13:28:31.774612       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57762: use of closed network connection
	I1213 13:28:32.899184       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.164.112"}
	E1213 13:28:38.944929       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57876: use of closed network connection
	I1213 13:28:40.178045       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 13:28:40.317021       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.227.101"}
	I1213 13:28:40.331082       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.130.5"}
	
	
	==> kube-controller-manager [2a2e1081622818a9ddb263a919e2da58acf218aa1befa4ce910b723106d35214] <==
	I1213 13:27:59.404467       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:59.404521       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:59.404578       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:59.404616       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:59.404693       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:59.404711       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:59.404741       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:59.404767       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:59.404777       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:59.405701       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:59.405719       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:59.405709       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:59.405721       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:59.411876       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:27:59.412832       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:59.503827       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:59.503850       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 13:27:59.503855       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 13:27:59.512408       1 shared_informer.go:377] "Caches are synced"
	E1213 13:28:40.235943       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 13:28:40.240055       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 13:28:40.248308       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 13:28:40.248427       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 13:28:40.254074       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 13:28:40.259548       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [785fcee4eba63956cf575ff7c4d9aedfedc7f8d241c109b10e931b47b899e587] <==
	I1213 13:27:37.211942       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 13:27:46.626636       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I1213 13:27:46.626789       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I1213 13:27:46.626851       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I1213 13:27:46.626944       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I1213 13:27:46.626993       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I1213 13:27:46.627037       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="resourceclaimtemplates.resource.k8s.io"
	I1213 13:27:46.627090       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I1213 13:27:46.627161       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I1213 13:27:46.627272       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I1213 13:27:46.627304       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I1213 13:27:46.627365       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I1213 13:27:46.627420       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I1213 13:27:46.628027       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I1213 13:27:46.628120       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I1213 13:27:46.628165       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I1213 13:27:46.628200       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I1213 13:27:46.628236       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I1213 13:27:46.628277       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I1213 13:27:46.628310       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I1213 13:27:46.628361       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I1213 13:27:46.628401       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I1213 13:27:46.985297       1 range_allocator.go:113] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	E1213 13:27:46.985764       1 controllermanager.go:575] "Error initializing a controller" err="failed to create Kubernetes client for \"resource-claim-controller\": Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/resource-claim-controller\": dial tcp 192.168.49.2:8441: connect: connection refused" controller="resourceclaim-controller"
	E1213 13:27:46.985785       1 controllermanager.go:257] "Error building controllers" err="failed to create Kubernetes client for \"resource-claim-controller\": Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/resource-claim-controller\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [3731eb9919a6eb6f36f5be0f026a77cb045b002d08d384b6c9849583a7f02b06] <==
	I1213 13:27:00.982623       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:27:01.045036       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:27:01.145979       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:01.146026       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1213 13:27:01.146141       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:27:01.167788       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:27:01.167841       1 server_linux.go:136] "Using iptables Proxier"
	I1213 13:27:01.173101       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:27:01.173482       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 13:27:01.173522       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:27:01.174820       1 config.go:200] "Starting service config controller"
	I1213 13:27:01.174854       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:27:01.174826       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:27:01.174868       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:27:01.174868       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:27:01.174878       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:27:01.174952       1 config.go:309] "Starting node config controller"
	I1213 13:27:01.174960       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:27:01.174966       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:27:01.275403       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 13:27:01.275453       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 13:27:01.275418       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [d9493adc1209e352ba77fde74485dd4c88066d48067efa0ed0cc28979b62a855] <==
	I1213 13:27:36.966748       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:27:37.033284       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:27:47.033988       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:47.034052       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1213 13:27:47.034154       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:27:47.055662       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:27:47.055727       1 server_linux.go:136] "Using iptables Proxier"
	I1213 13:27:47.061266       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:27:47.061669       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 13:27:47.061697       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:27:47.063213       1 config.go:200] "Starting service config controller"
	I1213 13:27:47.063226       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:27:47.063377       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:27:47.063410       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:27:47.063436       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:27:47.063473       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:27:47.063510       1 config.go:309] "Starting node config controller"
	I1213 13:27:47.063516       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:27:47.063522       1 shared_informer.go:356] "Caches are synced" controller="node config"
	E1213 13:27:47.064047       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1213 13:27:56.221336       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]"
	I1213 13:27:56.264202       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 13:27:57.764508       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 13:28:02.163310       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [6e54c42a3d486d87ab39de3ccc570c2cc96b93ebfedb2c2c7ee058bc0c430fe7] <==
	E1213 13:26:53.339832       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1213 13:26:53.340811       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1213 13:26:53.402259       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1213 13:26:53.403188       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1213 13:26:53.422298       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 13:26:53.423182       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1213 13:26:53.490851       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1213 13:26:53.491722       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1213 13:26:53.502680       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1213 13:26:53.503523       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1213 13:26:53.528719       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 13:26:53.529724       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1213 13:26:53.548777       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1213 13:26:53.549790       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1213 13:26:53.551693       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 13:26:53.552454       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1213 13:26:53.621807       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1213 13:26:53.622689       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	I1213 13:26:54.054511       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:36.400947       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 13:27:36.401148       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:27:36.401158       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 13:27:36.401378       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 13:27:36.401391       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 13:27:36.401413       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [91719c159f122f02c13d86269f7e4937ca231560868c7f1f9c09760b58005110] <==
	I1213 13:27:37.104708       1 serving.go:386] Generated self-signed cert in-memory
	I1213 13:27:45.196310       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1213 13:27:45.196348       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:27:45.200585       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1213 13:27:45.200599       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:27:45.200618       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:27:45.200625       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:27:45.200630       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 13:27:45.200651       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:27:45.200827       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 13:27:45.200980       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 13:27:45.401760       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:45.401861       1 shared_informer.go:377] "Caches are synced"
	I1213 13:27:45.401943       1 shared_informer.go:377] "Caches are synced"
	E1213 13:27:56.206897       1 reflector.go:204] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	
	
	==> kubelet <==
	Dec 13 13:28:25 functional-017456 kubelet[4843]: I1213 13:28:25.658438    4843 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/nginx-svc" podStartSLOduration=2.283339831 podStartE2EDuration="5.658412704s" podCreationTimestamp="2025-12-13 13:28:20 +0000 UTC" firstStartedPulling="2025-12-13 13:28:21.30217822 +0000 UTC m=+26.888683381" lastFinishedPulling="2025-12-13 13:28:24.67725104 +0000 UTC m=+30.263756254" observedRunningTime="2025-12-13 13:28:25.648288534 +0000 UTC m=+31.234793717" watchObservedRunningTime="2025-12-13 13:28:25.658412704 +0000 UTC m=+31.244917886"
	Dec 13 13:28:25 functional-017456 kubelet[4843]: I1213 13:28:25.784230    4843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cd705fe3-ebe4-47e6-b10d-a874fbd9ce6f\" (UniqueName: \"kubernetes.io/host-path/6142876a-2aec-4b91-8afd-7b2cc587252b-pvc-cd705fe3-ebe4-47e6-b10d-a874fbd9ce6f\") pod \"sp-pod\" (UID: \"6142876a-2aec-4b91-8afd-7b2cc587252b\") " pod="default/sp-pod"
	Dec 13 13:28:25 functional-017456 kubelet[4843]: I1213 13:28:25.784289    4843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nmsg\" (UniqueName: \"kubernetes.io/projected/6142876a-2aec-4b91-8afd-7b2cc587252b-kube-api-access-6nmsg\") pod \"sp-pod\" (UID: \"6142876a-2aec-4b91-8afd-7b2cc587252b\") " pod="default/sp-pod"
	Dec 13 13:28:31 functional-017456 kubelet[4843]: I1213 13:28:31.841118    4843 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=6.841095532 podStartE2EDuration="6.841095532s" podCreationTimestamp="2025-12-13 13:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:28:26.641239191 +0000 UTC m=+32.227744373" watchObservedRunningTime="2025-12-13 13:28:31.841095532 +0000 UTC m=+37.427600715"
	Dec 13 13:28:32 functional-017456 kubelet[4843]: I1213 13:28:32.224089    4843 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/6142876a-2aec-4b91-8afd-7b2cc587252b-kube-api-access-6nmsg\" (UniqueName: \"kubernetes.io/projected/6142876a-2aec-4b91-8afd-7b2cc587252b-kube-api-access-6nmsg\") pod \"6142876a-2aec-4b91-8afd-7b2cc587252b\" (UID: \"6142876a-2aec-4b91-8afd-7b2cc587252b\") "
	Dec 13 13:28:32 functional-017456 kubelet[4843]: I1213 13:28:32.224143    4843 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6142876a-2aec-4b91-8afd-7b2cc587252b-pvc-cd705fe3-ebe4-47e6-b10d-a874fbd9ce6f\" (UniqueName: \"kubernetes.io/host-path/6142876a-2aec-4b91-8afd-7b2cc587252b-pvc-cd705fe3-ebe4-47e6-b10d-a874fbd9ce6f\") pod \"6142876a-2aec-4b91-8afd-7b2cc587252b\" (UID: \"6142876a-2aec-4b91-8afd-7b2cc587252b\") "
	Dec 13 13:28:32 functional-017456 kubelet[4843]: I1213 13:28:32.224252    4843 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6142876a-2aec-4b91-8afd-7b2cc587252b-pvc-cd705fe3-ebe4-47e6-b10d-a874fbd9ce6f" pod "6142876a-2aec-4b91-8afd-7b2cc587252b" (UID: "6142876a-2aec-4b91-8afd-7b2cc587252b"). InnerVolumeSpecName "pvc-cd705fe3-ebe4-47e6-b10d-a874fbd9ce6f". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 13 13:28:32 functional-017456 kubelet[4843]: I1213 13:28:32.226921    4843 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6142876a-2aec-4b91-8afd-7b2cc587252b-kube-api-access-6nmsg" pod "6142876a-2aec-4b91-8afd-7b2cc587252b" (UID: "6142876a-2aec-4b91-8afd-7b2cc587252b"). InnerVolumeSpecName "kube-api-access-6nmsg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 13 13:28:32 functional-017456 kubelet[4843]: I1213 13:28:32.325243    4843 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6nmsg\" (UniqueName: \"kubernetes.io/projected/6142876a-2aec-4b91-8afd-7b2cc587252b-kube-api-access-6nmsg\") on node \"functional-017456\" DevicePath \"\""
	Dec 13 13:28:32 functional-017456 kubelet[4843]: I1213 13:28:32.325291    4843 reconciler_common.go:299] "Volume detached for volume \"pvc-cd705fe3-ebe4-47e6-b10d-a874fbd9ce6f\" (UniqueName: \"kubernetes.io/host-path/6142876a-2aec-4b91-8afd-7b2cc587252b-pvc-cd705fe3-ebe4-47e6-b10d-a874fbd9ce6f\") on node \"functional-017456\" DevicePath \"\""
	Dec 13 13:28:32 functional-017456 kubelet[4843]: I1213 13:28:32.651354    4843 scope.go:122] "RemoveContainer" containerID="839d65d585b864afb6a2e331770049fcdea2f07aee183a7d9e7aa9244e37f4a7"
	Dec 13 13:28:32 functional-017456 kubelet[4843]: I1213 13:28:32.671616    4843 scope.go:122] "RemoveContainer" containerID="839d65d585b864afb6a2e331770049fcdea2f07aee183a7d9e7aa9244e37f4a7"
	Dec 13 13:28:32 functional-017456 kubelet[4843]: E1213 13:28:32.672267    4843 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"839d65d585b864afb6a2e331770049fcdea2f07aee183a7d9e7aa9244e37f4a7\": not found" containerID="839d65d585b864afb6a2e331770049fcdea2f07aee183a7d9e7aa9244e37f4a7"
	Dec 13 13:28:32 functional-017456 kubelet[4843]: I1213 13:28:32.672393    4843 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"839d65d585b864afb6a2e331770049fcdea2f07aee183a7d9e7aa9244e37f4a7"} err="failed to get container status \"839d65d585b864afb6a2e331770049fcdea2f07aee183a7d9e7aa9244e37f4a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"839d65d585b864afb6a2e331770049fcdea2f07aee183a7d9e7aa9244e37f4a7\": not found"
	Dec 13 13:28:32 functional-017456 kubelet[4843]: I1213 13:28:32.927550    4843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cd705fe3-ebe4-47e6-b10d-a874fbd9ce6f\" (UniqueName: \"kubernetes.io/host-path/22be6582-22f1-4387-9c48-5c21f5449d4d-pvc-cd705fe3-ebe4-47e6-b10d-a874fbd9ce6f\") pod \"sp-pod\" (UID: \"22be6582-22f1-4387-9c48-5c21f5449d4d\") " pod="default/sp-pod"
	Dec 13 13:28:32 functional-017456 kubelet[4843]: I1213 13:28:32.927602    4843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5s8c\" (UniqueName: \"kubernetes.io/projected/22be6582-22f1-4387-9c48-5c21f5449d4d-kube-api-access-r5s8c\") pod \"sp-pod\" (UID: \"22be6582-22f1-4387-9c48-5c21f5449d4d\") " pod="default/sp-pod"
	Dec 13 13:28:33 functional-017456 kubelet[4843]: I1213 13:28:33.028270    4843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvsnv\" (UniqueName: \"kubernetes.io/projected/9e437e62-6eb7-4bfc-9c15-50cd5c54ca27-kube-api-access-xvsnv\") pod \"mysql-7d7b65bc95-5ppq4\" (UID: \"9e437e62-6eb7-4bfc-9c15-50cd5c54ca27\") " pod="default/mysql-7d7b65bc95-5ppq4"
	Dec 13 13:28:33 functional-017456 kubelet[4843]: I1213 13:28:33.670271    4843 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=1.670249788 podStartE2EDuration="1.670249788s" podCreationTimestamp="2025-12-13 13:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:28:33.669864219 +0000 UTC m=+39.256369401" watchObservedRunningTime="2025-12-13 13:28:33.670249788 +0000 UTC m=+39.256754969"
	Dec 13 13:28:34 functional-017456 kubelet[4843]: I1213 13:28:34.506613    4843 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6142876a-2aec-4b91-8afd-7b2cc587252b" path="/var/lib/kubelet/pods/6142876a-2aec-4b91-8afd-7b2cc587252b/volumes"
	Dec 13 13:28:37 functional-017456 kubelet[4843]: I1213 13:28:37.557310    4843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/4c7aad80-3395-4f9b-a96a-c629d22bdf94-test-volume\") pod \"busybox-mount\" (UID: \"4c7aad80-3395-4f9b-a96a-c629d22bdf94\") " pod="default/busybox-mount"
	Dec 13 13:28:37 functional-017456 kubelet[4843]: I1213 13:28:37.557511    4843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd8qm\" (UniqueName: \"kubernetes.io/projected/4c7aad80-3395-4f9b-a96a-c629d22bdf94-kube-api-access-bd8qm\") pod \"busybox-mount\" (UID: \"4c7aad80-3395-4f9b-a96a-c629d22bdf94\") " pod="default/busybox-mount"
	Dec 13 13:28:40 functional-017456 kubelet[4843]: I1213 13:28:40.376744    4843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2066d111-93dd-4c83-8877-ab5dca71fb6c-tmp-volume\") pod \"dashboard-metrics-scraper-5565989548-k88hv\" (UID: \"2066d111-93dd-4c83-8877-ab5dca71fb6c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-k88hv"
	Dec 13 13:28:40 functional-017456 kubelet[4843]: I1213 13:28:40.376794    4843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-672w9\" (UniqueName: \"kubernetes.io/projected/2066d111-93dd-4c83-8877-ab5dca71fb6c-kube-api-access-672w9\") pod \"dashboard-metrics-scraper-5565989548-k88hv\" (UID: \"2066d111-93dd-4c83-8877-ab5dca71fb6c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-k88hv"
	Dec 13 13:28:40 functional-017456 kubelet[4843]: I1213 13:28:40.376816    4843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4ac2fe21-7a33-4f7b-bbd9-df9bb425382b-tmp-volume\") pod \"kubernetes-dashboard-b84665fb8-2d58w\" (UID: \"4ac2fe21-7a33-4f7b-bbd9-df9bb425382b\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-2d58w"
	Dec 13 13:28:40 functional-017456 kubelet[4843]: I1213 13:28:40.376933    4843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjpd8\" (UniqueName: \"kubernetes.io/projected/4ac2fe21-7a33-4f7b-bbd9-df9bb425382b-kube-api-access-bjpd8\") pod \"kubernetes-dashboard-b84665fb8-2d58w\" (UID: \"4ac2fe21-7a33-4f7b-bbd9-df9bb425382b\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-2d58w"
	
	
	==> storage-provisioner [3823191bbd1fb2e58e800fb055b6090d7683b06a99d3c92b77fa6c0116eb49c1] <==
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-12-13 13:27:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-cd705fe3-ebe4-47e6-b10d-a874fbd9ce6f &PersistentVolumeClaim{ObjectMeta:{myclaim  default  cd705fe3-ebe4-47e6-b10d-a874fbd9ce6f 701 0 2025-12-13 13:28:25 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-12-13 13:28:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-12-13 13:28:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1213 13:28:25.486187       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-cd705fe3-ebe4-47e6-b10d-a874fbd9ce6f" provisioned
	I1213 13:28:25.486213       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1213 13:28:25.486219       1 volume_store.go:212] Trying to save persistentvolume "pvc-cd705fe3-ebe4-47e6-b10d-a874fbd9ce6f"
	I1213 13:28:25.494520       1 volume_store.go:219] persistentvolume "pvc-cd705fe3-ebe4-47e6-b10d-a874fbd9ce6f" saved
	I1213 13:28:25.494694       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"cd705fe3-ebe4-47e6-b10d-a874fbd9ce6f", APIVersion:"v1", ResourceVersion:"701", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-cd705fe3-ebe4-47e6-b10d-a874fbd9ce6f
	W1213 13:28:26.573957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:28:26.577891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:28:28.581209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:28:28.585121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:28:30.588749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:28:30.594063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:28:32.597976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:28:32.602769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:28:34.605917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:28:34.611839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:28:36.615995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:28:36.620482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:28:38.623923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:28:38.628203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:28:40.631755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:28:40.637374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:28:42.640565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:28:42.644895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c02e216c9a8c72235c62d4a45bf27816b9b0c68e68cdb55fd64027f2481551ff] <==
	W1213 13:27:12.294238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:12.297672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:27:12.392602       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-017456_1118886a-3932-48fc-a306-f9ac2ac6967a!
	W1213 13:27:14.304572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:14.313537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:16.316798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:16.320769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:18.325915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:18.330887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:20.334115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:20.339154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:22.342599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:22.346596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:24.349252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:24.354450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:26.358103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:26.361829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:28.364924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:28.370259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:30.373580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:30.377327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:32.380638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:32.384453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:34.387916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:27:34.392042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-017456 -n functional-017456
helpers_test.go:270: (dbg) Run:  kubectl --context functional-017456 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount mysql-7d7b65bc95-5ppq4 dashboard-metrics-scraper-5565989548-k88hv kubernetes-dashboard-b84665fb8-2d58w
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-017456 describe pod busybox-mount mysql-7d7b65bc95-5ppq4 dashboard-metrics-scraper-5565989548-k88hv kubernetes-dashboard-b84665fb8-2d58w
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-017456 describe pod busybox-mount mysql-7d7b65bc95-5ppq4 dashboard-metrics-scraper-5565989548-k88hv kubernetes-dashboard-b84665fb8-2d58w: exit status 1 (88.595469ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-017456/192.168.49.2
	Start Time:       Sat, 13 Dec 2025 13:28:37 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bd8qm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-bd8qm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  5s    default-scheduler  Successfully assigned default/busybox-mount to functional-017456
	  Normal  Pulling    6s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             mysql-7d7b65bc95-5ppq4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-017456/192.168.49.2
	Start Time:       Sat, 13 Dec 2025 13:28:32 +0000
	Labels:           app=mysql
	                  pod-template-hash=7d7b65bc95
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-7d7b65bc95
	Containers:
	  mysql:
	    Container ID:   
	    Image:          public.ecr.aws/docker/library/mysql:8.4
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xvsnv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xvsnv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10s   default-scheduler  Successfully assigned default/mysql-7d7b65bc95-5ppq4 to functional-017456
	  Normal  Pulling    10s   kubelet            Pulling image "public.ecr.aws/docker/library/mysql:8.4"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-k88hv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-2d58w" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-017456 describe pod busybox-mount mysql-7d7b65bc95-5ppq4 dashboard-metrics-scraper-5565989548-k88hv kubernetes-dashboard-b84665fb8-2d58w: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (4.14s)
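Note on the post-mortem above: the harness shells out to kubectl and logs, rather than aborts on, the non-zero exit from `describe` — the two dashboard pods were deleted between the field-selector listing and the describe call, so kubectl reports NotFound on stderr while still describing the pods that do exist. A minimal standalone sketch of that pattern (hypothetical helper name, not the helpers_test.go implementation):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// describePods runs `kubectl describe pod` for the listed pods in the given
	// context and returns whatever kubectl printed; a non-zero exit (e.g. the
	// NotFound errors above) is returned alongside the output rather than hiding it.
	func describePods(kubeContext string, pods []string) (string, error) {
		args := append([]string{"--context", kubeContext, "describe", "pod"}, pods...)
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		return string(out), err
	}
	
	func main() {
		out, err := describePods("functional-017456", []string{
			"busybox-mount",
			"mysql-7d7b65bc95-5ppq4",
			"dashboard-metrics-scraper-5565989548-k88hv",
			"kubernetes-dashboard-b84665fb8-2d58w",
		})
		fmt.Print(out)
		if err != nil {
			// Pods can disappear between listing and describing, so treat this
			// as informational for the post-mortem instead of a second failure.
			fmt.Println("describe exited non-zero (informational):", err)
		}
	}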

                                                
                                    

TestKubernetesUpgrade (599.37s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-205521 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-205521 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (27.891051315s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-205521
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-205521: (6.144912135s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-205521 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-205521 status --format={{.Host}}: exit status 7 (102.275883ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
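`minikube status` exits non-zero when the host is stopped (exit status 7 above), which is why the harness notes "may be ok" and keys off the printed state rather than the exit code. A minimal sketch of that check, assuming the same binary path and profile name (hypothetical function name):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// hostState asks minikube for the host state only. The non-zero exit code
	// (7 when the host is stopped) is deliberately ignored here because the
	// state string on stdout is what the check actually needs.
	func hostState(profile string) string {
		out, _ := exec.Command("out/minikube-linux-amd64",
			"-p", profile, "status", "--format={{.Host}}").Output()
		return strings.TrimSpace(string(out))
	}
	
	func main() {
		if state := hostState("kubernetes-upgrade-205521"); state == "Stopped" {
			fmt.Println("host is stopped, as expected after `minikube stop`")
		} else {
			fmt.Println("unexpected host state:", state)
		}
	}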
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-205521 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-205521 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (23.282723122s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-205521 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-205521 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-205521 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (102.564128ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-205521] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-205521
	    minikube start -p kubernetes-upgrade-205521 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2055212 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-205521 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
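The downgrade attempt is expected to be refused: minikube exits with status 106 and K8S_DOWNGRADE_UNSUPPORTED instead of touching the existing v1.35.0-beta.0 cluster. A minimal standalone sketch of that assertion, re-running the same command and checking only the exit code (not the version_upgrade_test.go implementation):

	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Re-run the downgrade command from the report and require that it fails.
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "kubernetes-upgrade-205521",
			"--memory=3072",
			"--kubernetes-version=v1.28.0",
			"--driver=docker",
			"--container-runtime=containerd")
		out, err := cmd.CombinedOutput()
	
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("unexpected: the downgrade start succeeded")
		case errors.As(err, &exitErr):
			// The run above exited with status 106 (K8S_DOWNGRADE_UNSUPPORTED).
			fmt.Printf("minikube refused the downgrade (exit code %d)\n%s", exitErr.ExitCode(), out)
		default:
			fmt.Println("could not run minikube:", err)
		}
	}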
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-205521 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-205521 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 80 (7m26.673997055s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-205521] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-205521" primary control-plane node in "kubernetes-upgrade-205521" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:50:06.693357  632701 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:50:06.693516  632701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:50:06.693526  632701 out.go:374] Setting ErrFile to fd 2...
	I1213 13:50:06.693533  632701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:50:06.693892  632701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:50:06.694455  632701 out.go:368] Setting JSON to false
	I1213 13:50:06.695949  632701 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9150,"bootTime":1765624657,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:50:06.696033  632701 start.go:143] virtualization: kvm guest
	I1213 13:50:06.698080  632701 out.go:179] * [kubernetes-upgrade-205521] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:50:06.699744  632701 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:50:06.699750  632701 notify.go:221] Checking for updates...
	I1213 13:50:06.701239  632701 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:50:06.702555  632701 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	I1213 13:50:06.703714  632701 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	I1213 13:50:06.704897  632701 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:50:06.706536  632701 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:50:06.708479  632701 config.go:182] Loaded profile config "kubernetes-upgrade-205521": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 13:50:06.709463  632701 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:50:06.741628  632701 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:50:06.741816  632701 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:50:06.817226  632701 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:79 SystemTime:2025-12-13 13:50:06.803996931 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:50:06.817409  632701 docker.go:319] overlay module found
	I1213 13:50:06.818921  632701 out.go:179] * Using the docker driver based on existing profile
	I1213 13:50:06.820038  632701 start.go:309] selected driver: docker
	I1213 13:50:06.820063  632701 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-205521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-205521 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreD
NSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:50:06.820177  632701 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:50:06.820950  632701 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:50:06.882311  632701 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:79 SystemTime:2025-12-13 13:50:06.872308355 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:50:06.882653  632701 cni.go:84] Creating CNI manager for ""
	I1213 13:50:06.882735  632701 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 13:50:06.882792  632701 start.go:353] cluster config:
	{Name:kubernetes-upgrade-205521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-205521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:50:06.884496  632701 out.go:179] * Starting "kubernetes-upgrade-205521" primary control-plane node in "kubernetes-upgrade-205521" cluster
	I1213 13:50:06.885491  632701 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 13:50:06.886723  632701 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:50:06.887918  632701 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 13:50:06.887966  632701 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4
	I1213 13:50:06.887981  632701 cache.go:65] Caching tarball of preloaded images
	I1213 13:50:06.888016  632701 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:50:06.888087  632701 preload.go:238] Found /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 13:50:06.888103  632701 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 13:50:06.888217  632701 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521/config.json ...
	I1213 13:50:06.909156  632701 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 13:50:06.909180  632701 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 13:50:06.909208  632701 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:50:06.909246  632701 start.go:360] acquireMachinesLock for kubernetes-upgrade-205521: {Name:mk6b0beed7dbffce6ba125f432513a02d3aef028 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:50:06.909382  632701 start.go:364] duration metric: took 112.226µs to acquireMachinesLock for "kubernetes-upgrade-205521"
	I1213 13:50:06.909412  632701 start.go:96] Skipping create...Using existing machine configuration
	I1213 13:50:06.909420  632701 fix.go:54] fixHost starting: 
	I1213 13:50:06.909710  632701 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205521 --format={{.State.Status}}
	I1213 13:50:06.932142  632701 fix.go:112] recreateIfNeeded on kubernetes-upgrade-205521: state=Running err=<nil>
	W1213 13:50:06.932216  632701 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 13:50:06.934132  632701 out.go:252] * Updating the running docker "kubernetes-upgrade-205521" container ...
	I1213 13:50:06.934183  632701 machine.go:94] provisionDockerMachine start ...
	I1213 13:50:06.934270  632701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205521
	I1213 13:50:06.957404  632701 main.go:143] libmachine: Using SSH client type: native
	I1213 13:50:06.957746  632701 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33382 <nil> <nil>}
	I1213 13:50:06.957761  632701 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:50:07.106282  632701 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-205521
	
	I1213 13:50:07.106328  632701 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-205521"
	I1213 13:50:07.106397  632701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205521
	I1213 13:50:07.126709  632701 main.go:143] libmachine: Using SSH client type: native
	I1213 13:50:07.127011  632701 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33382 <nil> <nil>}
	I1213 13:50:07.127033  632701 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-205521 && echo "kubernetes-upgrade-205521" | sudo tee /etc/hostname
	I1213 13:50:07.271456  632701 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-205521
	
	I1213 13:50:07.271539  632701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205521
	I1213 13:50:07.291702  632701 main.go:143] libmachine: Using SSH client type: native
	I1213 13:50:07.292047  632701 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33382 <nil> <nil>}
	I1213 13:50:07.292074  632701 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-205521' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-205521/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-205521' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:50:07.438299  632701 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:50:07.438345  632701 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-401936/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-401936/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-401936/.minikube}
	I1213 13:50:07.438379  632701 ubuntu.go:190] setting up certificates
	I1213 13:50:07.438393  632701 provision.go:84] configureAuth start
	I1213 13:50:07.438450  632701 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-205521
	I1213 13:50:07.457876  632701 provision.go:143] copyHostCerts
	I1213 13:50:07.457936  632701 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-401936/.minikube/key.pem, removing ...
	I1213 13:50:07.457949  632701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-401936/.minikube/key.pem
	I1213 13:50:07.458007  632701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-401936/.minikube/key.pem (1675 bytes)
	I1213 13:50:07.458109  632701 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-401936/.minikube/ca.pem, removing ...
	I1213 13:50:07.458115  632701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-401936/.minikube/ca.pem
	I1213 13:50:07.458145  632701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-401936/.minikube/ca.pem (1078 bytes)
	I1213 13:50:07.458210  632701 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-401936/.minikube/cert.pem, removing ...
	I1213 13:50:07.458214  632701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-401936/.minikube/cert.pem
	I1213 13:50:07.458237  632701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-401936/.minikube/cert.pem (1123 bytes)
	I1213 13:50:07.458297  632701 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-401936/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-205521 san=[127.0.0.1 192.168.94.2 kubernetes-upgrade-205521 localhost minikube]
	I1213 13:50:07.568262  632701 provision.go:177] copyRemoteCerts
	I1213 13:50:07.568352  632701 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:50:07.568400  632701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205521
	I1213 13:50:07.596525  632701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33382 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/kubernetes-upgrade-205521/id_rsa Username:docker}
	I1213 13:50:07.706087  632701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1213 13:50:07.731838  632701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 13:50:07.754469  632701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:50:07.773789  632701 provision.go:87] duration metric: took 335.382134ms to configureAuth
	I1213 13:50:07.773817  632701 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:50:07.774007  632701 config.go:182] Loaded profile config "kubernetes-upgrade-205521": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 13:50:07.774021  632701 machine.go:97] duration metric: took 839.82941ms to provisionDockerMachine
	I1213 13:50:07.774046  632701 start.go:293] postStartSetup for "kubernetes-upgrade-205521" (driver="docker")
	I1213 13:50:07.774060  632701 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:50:07.774119  632701 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:50:07.774174  632701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205521
	I1213 13:50:07.799965  632701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33382 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/kubernetes-upgrade-205521/id_rsa Username:docker}
	I1213 13:50:07.921806  632701 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:50:07.925657  632701 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:50:07.925691  632701 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:50:07.925704  632701 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-401936/.minikube/addons for local assets ...
	I1213 13:50:07.925759  632701 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-401936/.minikube/files for local assets ...
	I1213 13:50:07.925861  632701 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-401936/.minikube/files/etc/ssl/certs/4055312.pem -> 4055312.pem in /etc/ssl/certs
	I1213 13:50:07.925992  632701 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:50:07.935230  632701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/files/etc/ssl/certs/4055312.pem --> /etc/ssl/certs/4055312.pem (1708 bytes)
	I1213 13:50:07.956043  632701 start.go:296] duration metric: took 181.978477ms for postStartSetup
	I1213 13:50:07.956137  632701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:50:07.956191  632701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205521
	I1213 13:50:07.979836  632701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33382 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/kubernetes-upgrade-205521/id_rsa Username:docker}
	I1213 13:50:08.087242  632701 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:50:08.092697  632701 fix.go:56] duration metric: took 1.183271276s for fixHost
	I1213 13:50:08.092722  632701 start.go:83] releasing machines lock for "kubernetes-upgrade-205521", held for 1.183319956s
	I1213 13:50:08.092804  632701 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-205521
	I1213 13:50:08.115000  632701 ssh_runner.go:195] Run: cat /version.json
	I1213 13:50:08.115053  632701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205521
	I1213 13:50:08.115074  632701 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:50:08.115150  632701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205521
	I1213 13:50:08.137007  632701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33382 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/kubernetes-upgrade-205521/id_rsa Username:docker}
	I1213 13:50:08.137021  632701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33382 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/kubernetes-upgrade-205521/id_rsa Username:docker}
	I1213 13:50:08.314245  632701 ssh_runner.go:195] Run: systemctl --version
	I1213 13:50:08.321966  632701 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:50:08.327616  632701 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:50:08.327702  632701 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:50:08.341115  632701 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 13:50:08.341243  632701 start.go:496] detecting cgroup driver to use...
	I1213 13:50:08.341298  632701 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:50:08.341438  632701 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 13:50:08.361757  632701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 13:50:08.379891  632701 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:50:08.379963  632701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:50:08.400715  632701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:50:08.419443  632701 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:50:08.543261  632701 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:50:08.668070  632701 docker.go:234] disabling docker service ...
	I1213 13:50:08.668161  632701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:50:08.685869  632701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:50:08.699534  632701 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:50:08.804213  632701 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:50:08.898995  632701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:50:08.915185  632701 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:50:08.936186  632701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 13:50:08.946207  632701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 13:50:08.957966  632701 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1213 13:50:08.958059  632701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1213 13:50:08.969529  632701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 13:50:08.980449  632701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 13:50:08.992151  632701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 13:50:09.021437  632701 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:50:09.032467  632701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 13:50:09.044022  632701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 13:50:09.054197  632701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 13:50:09.064195  632701 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:50:09.072705  632701 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:50:09.080956  632701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:50:09.187627  632701 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 13:50:09.299465  632701 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 13:50:09.299542  632701 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 13:50:09.304283  632701 start.go:564] Will wait 60s for crictl version
	I1213 13:50:09.304386  632701 ssh_runner.go:195] Run: which crictl
	I1213 13:50:09.308571  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:50:09.334400  632701 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 13:50:09.334730  632701 ssh_runner.go:195] Run: containerd --version
	I1213 13:50:09.358618  632701 ssh_runner.go:195] Run: containerd --version
	I1213 13:50:09.388371  632701 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 13:50:09.389966  632701 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-205521 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:50:09.409445  632701 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1213 13:50:09.414249  632701 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-205521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-205521 Namespace:default APIServerHAVIP: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:50:09.414393  632701 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 13:50:09.414442  632701 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:50:09.439655  632701 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0". assuming images are not preloaded.
	I1213 13:50:09.439749  632701 ssh_runner.go:195] Run: which lz4
	I1213 13:50:09.444268  632701 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 13:50:09.448500  632701 ssh_runner.go:356] copy: skipping /preloaded.tar.lz4 (exists)
	I1213 13:50:09.448527  632701 containerd.go:563] duration metric: took 4.30655ms to copy over tarball
	I1213 13:50:09.448584  632701 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 13:50:11.848636  632701 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.400021108s)
	I1213 13:50:11.848716  632701 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
	stdout:
	
	stderr:
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	I1213 13:50:11.848819  632701 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:50:11.884100  632701 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0". assuming images are not preloaded.
	I1213 13:50:11.884131  632701 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 13:50:11.884243  632701 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:50:11.884284  632701 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 13:50:11.884337  632701 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 13:50:11.884529  632701 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 13:50:11.884590  632701 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 13:50:11.884669  632701 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 13:50:11.884727  632701 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1213 13:50:11.884727  632701 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1213 13:50:11.885653  632701 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 13:50:11.885870  632701 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1213 13:50:11.885907  632701 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 13:50:11.885933  632701 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 13:50:11.885984  632701 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 13:50:11.885654  632701 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:50:11.886009  632701 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 13:50:11.886427  632701 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1213 13:50:12.059388  632701 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1"
	I1213 13:50:12.059462  632701 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1213 13:50:12.066194  632701 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46"
	I1213 13:50:12.066271  632701 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 13:50:12.085928  632701 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b"
	I1213 13:50:12.085979  632701 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 13:50:12.092234  632701 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1213 13:50:12.092282  632701 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1213 13:50:12.092376  632701 ssh_runner.go:195] Run: which crictl
	I1213 13:50:12.097513  632701 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1213 13:50:12.097554  632701 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 13:50:12.097602  632701 ssh_runner.go:195] Run: which crictl
	I1213 13:50:12.102043  632701 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1213 13:50:12.102111  632701 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1213 13:50:12.104245  632701 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139"
	I1213 13:50:12.104302  632701 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1213 13:50:12.115103  632701 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1213 13:50:12.115155  632701 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 13:50:12.115196  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 13:50:12.115198  632701 ssh_runner.go:195] Run: which crictl
	I1213 13:50:12.115280  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 13:50:12.130880  632701 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1213 13:50:12.130929  632701 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1213 13:50:12.130976  632701 ssh_runner.go:195] Run: which crictl
	I1213 13:50:12.132513  632701 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1213 13:50:12.132554  632701 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 13:50:12.132599  632701 ssh_runner.go:195] Run: which crictl
	I1213 13:50:12.146579  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 13:50:12.146645  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 13:50:12.146687  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 13:50:12.146698  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 13:50:12.146752  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 13:50:12.170673  632701 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc"
	I1213 13:50:12.170749  632701 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 13:50:12.178202  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 13:50:12.258797  632701 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810"
	I1213 13:50:12.258867  632701 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 13:50:13.192619  632701 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1213 13:50:13.192701  632701 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:50:13.325362  632701 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0: (1.17865s)
	I1213 13:50:13.325401  632701 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-401936/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1213 13:50:13.325484  632701 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1: (1.178716986s)
	I1213 13:50:13.325487  632701 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.178764993s)
	I1213 13:50:13.325513  632701 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-401936/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1213 13:50:13.325521  632701 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0: (1.178804771s)
	I1213 13:50:13.325537  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 13:50:13.325570  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 13:50:13.325593  632701 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1213 13:50:13.325612  632701 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: (1.15484054s)
	I1213 13:50:13.325634  632701 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0: (1.14741131s)
	I1213 13:50:13.325650  632701 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1213 13:50:13.325663  632701 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-401936/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1213 13:50:13.325680  632701 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 13:50:13.325718  632701 ssh_runner.go:195] Run: which crictl
	I1213 13:50:13.325718  632701 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1213 13:50:13.325740  632701 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0: (1.066856888s)
	I1213 13:50:13.325773  632701 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1213 13:50:13.325808  632701 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 13:50:13.325815  632701 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1213 13:50:13.325849  632701 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:50:13.325851  632701 ssh_runner.go:195] Run: which crictl
	I1213 13:50:13.325902  632701 ssh_runner.go:195] Run: which crictl
	I1213 13:50:13.353521  632701 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-401936/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1213 13:50:13.353551  632701 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.10.1 (exists)
	I1213 13:50:13.353568  632701 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1213 13:50:13.353614  632701 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1213 13:50:13.354148  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 13:50:13.354190  632701 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.6.5-0 (exists)
	I1213 13:50:13.354153  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 13:50:13.354246  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 13:50:13.354337  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:50:14.727623  632701 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1: (1.373977631s)
	I1213 13:50:14.727717  632701 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22122-401936/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1213 13:50:14.727750  632701 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1213 13:50:14.727818  632701 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1213 13:50:14.727681  632701 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: (1.373439151s)
	I1213 13:50:14.727814  632701 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0: (1.373547415s)
	I1213 13:50:14.728083  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 13:50:14.728192  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 13:50:14.727847  632701 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.373479232s)
	I1213 13:50:14.728294  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:50:14.727873  632701 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1: (1.37370196s)
	I1213 13:50:14.728455  632701 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-401936/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1213 13:50:15.709789  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 13:50:15.709798  632701 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22122-401936/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1213 13:50:15.709934  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 13:50:15.710030  632701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:50:15.742879  632701 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-401936/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1213 13:50:15.742908  632701 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-401936/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1213 13:50:15.745289  632701 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-401936/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1213 13:50:15.745408  632701 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1213 13:50:15.749823  632701 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1213 13:50:15.749846  632701 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 13:50:15.749899  632701 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1213 13:50:15.994472  632701 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22122-401936/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1213 13:50:15.994551  632701 cache_images.go:94] duration metric: took 4.110396981s to LoadCachedImages
	W1213 13:50:15.994648  632701 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22122-401936/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22122-401936/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0: no such file or directory
	I1213 13:50:15.994661  632701 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 13:50:15.994754  632701 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-205521 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-205521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:50:15.994822  632701 ssh_runner.go:195] Run: sudo crictl info
	I1213 13:50:16.024566  632701 cni.go:84] Creating CNI manager for ""
	I1213 13:50:16.024591  632701 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 13:50:16.024610  632701 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 13:50:16.024638  632701 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-205521 NodeName:kubernetes-upgrade-205521 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:50:16.024787  632701 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-205521"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 13:50:16.024863  632701 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 13:50:16.033422  632701 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:50:16.033509  632701 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:50:16.042457  632701 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (336 bytes)
	I1213 13:50:16.057856  632701 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 13:50:16.073158  632701 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2244 bytes)
	I1213 13:50:16.087343  632701 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:50:16.091760  632701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:50:16.184820  632701 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:50:16.200266  632701 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521 for IP: 192.168.94.2
	I1213 13:50:16.200291  632701 certs.go:195] generating shared ca certs ...
	I1213 13:50:16.200342  632701 certs.go:227] acquiring lock for ca certs: {Name:mk638ad0c55891f03a1600a7ef1d632862f1d7c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:50:16.200497  632701 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-401936/.minikube/ca.key
	I1213 13:50:16.200536  632701 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-401936/.minikube/proxy-client-ca.key
	I1213 13:50:16.200553  632701 certs.go:257] generating profile certs ...
	I1213 13:50:16.200657  632701 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521/client.key
	I1213 13:50:16.200732  632701 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521/apiserver.key.b0f914aa
	I1213 13:50:16.200785  632701 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521/proxy-client.key
	I1213 13:50:16.200921  632701 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/405531.pem (1338 bytes)
	W1213 13:50:16.201027  632701 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-401936/.minikube/certs/405531_empty.pem, impossibly tiny 0 bytes
	I1213 13:50:16.201040  632701 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:50:16.201087  632701 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:50:16.201120  632701 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:50:16.201156  632701 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-401936/.minikube/certs/key.pem (1675 bytes)
	I1213 13:50:16.201224  632701 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-401936/.minikube/files/etc/ssl/certs/4055312.pem (1708 bytes)
	I1213 13:50:16.201889  632701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:50:16.221418  632701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 13:50:16.241963  632701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:50:16.266227  632701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 13:50:16.286051  632701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1213 13:50:16.305523  632701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 13:50:16.328451  632701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:50:16.346810  632701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:50:16.371999  632701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:50:16.396299  632701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/certs/405531.pem --> /usr/share/ca-certificates/405531.pem (1338 bytes)
	I1213 13:50:16.420171  632701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-401936/.minikube/files/etc/ssl/certs/4055312.pem --> /usr/share/ca-certificates/4055312.pem (1708 bytes)
	I1213 13:50:16.447340  632701 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:50:16.467901  632701 ssh_runner.go:195] Run: openssl version
	I1213 13:50:16.476205  632701 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:50:16.485435  632701 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:50:16.494641  632701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:50:16.499587  632701 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:50:16.499646  632701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:50:16.534804  632701 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:50:16.542941  632701 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/405531.pem
	I1213 13:50:16.551621  632701 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/405531.pem /etc/ssl/certs/405531.pem
	I1213 13:50:16.561835  632701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/405531.pem
	I1213 13:50:16.570362  632701 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:26 /usr/share/ca-certificates/405531.pem
	I1213 13:50:16.570425  632701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/405531.pem
	I1213 13:50:16.621213  632701 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:50:16.630524  632701 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4055312.pem
	I1213 13:50:16.641983  632701 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4055312.pem /etc/ssl/certs/4055312.pem
	I1213 13:50:16.652586  632701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4055312.pem
	I1213 13:50:16.658131  632701 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:26 /usr/share/ca-certificates/4055312.pem
	I1213 13:50:16.658196  632701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4055312.pem
	I1213 13:50:16.703586  632701 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:50:16.712263  632701 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:50:16.716583  632701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 13:50:16.754188  632701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 13:50:16.791792  632701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 13:50:16.831068  632701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 13:50:16.878288  632701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 13:50:16.923119  632701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 13:50:16.968271  632701 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-205521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-205521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:50:16.968399  632701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 13:50:16.968451  632701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:50:17.001373  632701 cri.go:89] found id: ""
	I1213 13:50:17.001447  632701 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:50:17.011496  632701 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 13:50:17.011517  632701 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 13:50:17.011585  632701 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 13:50:17.022824  632701 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 13:50:17.023435  632701 kubeconfig.go:125] found "kubernetes-upgrade-205521" server: "https://192.168.94.2:8443"
	I1213 13:50:17.024077  632701 kapi.go:59] client config for kubernetes-upgrade-205521: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521/client.key", CAFile:"/home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 13:50:17.024537  632701 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 13:50:17.024554  632701 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 13:50:17.024561  632701 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 13:50:17.024569  632701 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 13:50:17.024575  632701 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 13:50:17.024929  632701 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 13:50:17.034133  632701 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1213 13:50:17.034167  632701 kubeadm.go:602] duration metric: took 22.644723ms to restartPrimaryControlPlane
	I1213 13:50:17.034176  632701 kubeadm.go:403] duration metric: took 65.92325ms to StartCluster
	I1213 13:50:17.034195  632701 settings.go:142] acquiring lock: {Name:mk71afd6e9758cc52371589a74f73214557044d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:50:17.034265  632701 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-401936/kubeconfig
	I1213 13:50:17.035426  632701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/kubeconfig: {Name:mk743b5761bd946614fa12c7aa179660c36f36c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:50:17.035695  632701 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 13:50:17.035828  632701 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 13:50:17.035918  632701 config.go:182] Loaded profile config "kubernetes-upgrade-205521": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 13:50:17.035931  632701 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-205521"
	I1213 13:50:17.035956  632701 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-205521"
	I1213 13:50:17.035960  632701 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-205521"
	I1213 13:50:17.035993  632701 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-205521"
	W1213 13:50:17.035968  632701 addons.go:248] addon storage-provisioner should already be in state true
	I1213 13:50:17.036089  632701 host.go:66] Checking if "kubernetes-upgrade-205521" exists ...
	I1213 13:50:17.036362  632701 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205521 --format={{.State.Status}}
	I1213 13:50:17.036576  632701 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205521 --format={{.State.Status}}
	I1213 13:50:17.038267  632701 out.go:179] * Verifying Kubernetes components...
	I1213 13:50:17.039529  632701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:50:17.058842  632701 kapi.go:59] client config for kubernetes-upgrade-205521: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521/client.key", CAFile:"/home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 13:50:17.059290  632701 addons.go:239] Setting addon default-storageclass=true in "kubernetes-upgrade-205521"
	W1213 13:50:17.059311  632701 addons.go:248] addon default-storageclass should already be in state true
	I1213 13:50:17.059354  632701 host.go:66] Checking if "kubernetes-upgrade-205521" exists ...
	I1213 13:50:17.059879  632701 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-205521 --format={{.State.Status}}
	I1213 13:50:17.060155  632701 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:50:17.061445  632701 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:50:17.061466  632701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 13:50:17.061517  632701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205521
	I1213 13:50:17.087491  632701 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:50:17.087521  632701 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:50:17.087588  632701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-205521
	I1213 13:50:17.091221  632701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33382 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/kubernetes-upgrade-205521/id_rsa Username:docker}
	I1213 13:50:17.110587  632701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33382 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/kubernetes-upgrade-205521/id_rsa Username:docker}
	I1213 13:50:17.163644  632701 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:50:17.178789  632701 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:50:17.178865  632701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:50:17.191599  632701 api_server.go:72] duration metric: took 155.863675ms to wait for apiserver process to appear ...
	I1213 13:50:17.191630  632701 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:50:17.191653  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:50:17.205633  632701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:50:17.222075  632701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:50:19.197453  632701 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:50:19.197492  632701 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:50:19.197521  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:50:21.202976  632701 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:50:21.203008  632701 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:50:21.203023  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:50:23.209138  632701 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:50:23.209186  632701 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:50:23.209205  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:50:25.214613  632701 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:50:25.214664  632701 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:50:25.214685  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:50:27.219644  632701 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:50:27.219734  632701 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:50:27.219767  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:50:29.226156  632701 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:50:29.226190  632701 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:50:29.226213  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:50:31.231510  632701 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:50:31.231543  632701 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:50:31.231561  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:50:33.238273  632701 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:50:33.238336  632701 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:50:33.238361  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:50:35.244125  632701 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:50:35.244163  632701 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:50:35.244190  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:50:40.247144  632701 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 13:50:40.247181  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:50:45.250062  632701 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 13:50:45.250109  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:50:45.254084  632701 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:50:45.254110  632701 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:50:45.692800  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:50:45.697001  632701 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:50:45.697033  632701 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:50:46.192707  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:50:51.193924  632701 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 13:50:51.193971  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:50:56.198281  632701 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 13:50:56.198368  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:50:56.203397  632701 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:50:56.203429  632701 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:50:56.692035  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:51:01.696646  632701 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 13:51:01.696697  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:51:01.700935  632701 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:51:01.700978  632701 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:51:02.192685  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:51:02.197765  632701 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:51:02.197789  632701 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:51:02.692516  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:51:07.693072  632701 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 13:51:07.693110  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:51:12.693852  632701 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 13:51:12.693890  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:51:12.698104  632701 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:51:12.698134  632701 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:51:13.191748  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:51:18.193020  632701 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 13:51:18.193101  632701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 13:51:18.193176  632701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
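
Throughout this stretch of the log, every poll of https://192.168.94.2:8443/healthz either returns HTTP 500 with only the etcd check failing (kube-apiserver writes "reason withheld" into the response body; the concrete etcd error is normally only visible in the apiserver's own container log) or is cut off by the client's per-request timeout ("context deadline exceeded" after roughly five seconds). The sketch below reproduces that polling pattern in Go purely for illustration; it is not the api_server.go implementation, and it assumes an anonymous HTTPS probe with certificate verification disabled.

// healthzpoll: a minimal sketch of the polling pattern seen in this log.
// GET /healthz with a short per-request timeout, treat 200 as healthy and
// print the check list on any other status. Illustrative only; it skips
// proper TLS verification and is not minikube's api_server.go code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	const url = "https://192.168.94.2:8443/healthz" // endpoint taken from the log
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s "Client.Timeout exceeded" gaps above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. context deadline exceeded
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

For a one-off check of the same endpoint, the individual health check can also be queried directly, for example with kubectl get --raw '/healthz/etcd' or kubectl get --raw '/healthz?verbose'; the failing reason is still withheld from the HTTP body, so the underlying etcd error has to be read from the kube-apiserver log itself.
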
	I1213 13:56:17.500564  632701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6m0.294875203s)
	W1213 13:56:17.500625  632701 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
	I1213 13:56:17.500646  632701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6m0.278541105s)
	W1213 13:56:17.500671  632701 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
	Name: "standard", Namespace: ""
	from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
	I1213 13:56:17.500746  632701 ssh_runner.go:235] Completed: sudo crictl ps -a --quiet --name=kube-apiserver: (4m59.307536818s)
	I1213 13:56:17.500780  632701 cri.go:89] found id: "f4c6e6ab7cffa52df32ee2633b832b2dbfd868871ed60cf0d6e5b742539dcd25"
	I1213 13:56:17.500788  632701 cri.go:89] found id: ""
	I1213 13:56:17.500800  632701 logs.go:282] 1 containers: [f4c6e6ab7cffa52df32ee2633b832b2dbfd868871ed60cf0d6e5b742539dcd25]
	W1213 13:56:17.500828  632701 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
	]
	W1213 13:56:17.500834  632701 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
	Name: "standard", Namespace: ""
	from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
	Name: "standard", Namespace: ""
	from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
	]
	I1213 13:56:17.500863  632701 ssh_runner.go:195] Run: which crictl
	I1213 13:56:17.502813  632701 out.go:179] * Enabled addons: 
	I1213 13:56:17.503952  632701 addons.go:530] duration metric: took 6m0.468128572s for enable addons: enabled=[]
	I1213 13:56:17.505072  632701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 13:56:17.505148  632701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 13:56:17.531496  632701 cri.go:89] found id: "9305862e4b5f7874e796d34ea81ef7d8669a1b5f95876296661beaaabd7b50b2"
	I1213 13:56:17.531527  632701 cri.go:89] found id: ""
	I1213 13:56:17.531538  632701 logs.go:282] 1 containers: [9305862e4b5f7874e796d34ea81ef7d8669a1b5f95876296661beaaabd7b50b2]
	I1213 13:56:17.531608  632701 ssh_runner.go:195] Run: which crictl
	I1213 13:56:17.535651  632701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 13:56:17.535719  632701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 13:56:17.563188  632701 cri.go:89] found id: ""
	I1213 13:56:17.563213  632701 logs.go:282] 0 containers: []
	W1213 13:56:17.563222  632701 logs.go:284] No container was found matching "coredns"
	I1213 13:56:17.563229  632701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 13:56:17.563284  632701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 13:56:17.590514  632701 cri.go:89] found id: "87b57156c8910e6a278f4c3d6e5c85fb4c959ef1e42013e56c28f1af1bfd16e5"
	I1213 13:56:17.590539  632701 cri.go:89] found id: ""
	I1213 13:56:17.590549  632701 logs.go:282] 1 containers: [87b57156c8910e6a278f4c3d6e5c85fb4c959ef1e42013e56c28f1af1bfd16e5]
	I1213 13:56:17.590599  632701 ssh_runner.go:195] Run: which crictl
	I1213 13:56:17.594388  632701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 13:56:17.594465  632701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 13:56:17.621026  632701 cri.go:89] found id: ""
	I1213 13:56:17.621050  632701 logs.go:282] 0 containers: []
	W1213 13:56:17.621059  632701 logs.go:284] No container was found matching "kube-proxy"
	I1213 13:56:17.621067  632701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 13:56:17.621118  632701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 13:56:17.647034  632701 cri.go:89] found id: "a2b369f74009ca47aa6cbb3c8ef1c63a92b9443c04f7fd5ff652babeeae894b1"
	I1213 13:56:17.647060  632701 cri.go:89] found id: ""
	I1213 13:56:17.647077  632701 logs.go:282] 1 containers: [a2b369f74009ca47aa6cbb3c8ef1c63a92b9443c04f7fd5ff652babeeae894b1]
	I1213 13:56:17.647139  632701 ssh_runner.go:195] Run: which crictl
	I1213 13:56:17.651080  632701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 13:56:17.651157  632701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 13:56:17.677171  632701 cri.go:89] found id: ""
	I1213 13:56:17.677198  632701 logs.go:282] 0 containers: []
	W1213 13:56:17.677207  632701 logs.go:284] No container was found matching "kindnet"
	I1213 13:56:17.677213  632701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 13:56:17.677272  632701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 13:56:17.703612  632701 cri.go:89] found id: ""
	I1213 13:56:17.703645  632701 logs.go:282] 0 containers: []
	W1213 13:56:17.703659  632701 logs.go:284] No container was found matching "storage-provisioner"
	I1213 13:56:17.703682  632701 logs.go:123] Gathering logs for describe nodes ...
	I1213 13:56:17.703704  632701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 13:57:17.768399  632701 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.064664819s)
	W1213 13:57:17.768449  632701 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1213 13:57:17.768465  632701 logs.go:123] Gathering logs for etcd [9305862e4b5f7874e796d34ea81ef7d8669a1b5f95876296661beaaabd7b50b2] ...
	I1213 13:57:17.768481  632701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9305862e4b5f7874e796d34ea81ef7d8669a1b5f95876296661beaaabd7b50b2"
	I1213 13:57:17.803904  632701 logs.go:123] Gathering logs for kube-controller-manager [a2b369f74009ca47aa6cbb3c8ef1c63a92b9443c04f7fd5ff652babeeae894b1] ...
	I1213 13:57:17.803939  632701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2b369f74009ca47aa6cbb3c8ef1c63a92b9443c04f7fd5ff652babeeae894b1"
	I1213 13:57:17.833517  632701 logs.go:123] Gathering logs for containerd ...
	I1213 13:57:17.833545  632701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 13:57:17.914028  632701 logs.go:123] Gathering logs for kubelet ...
	I1213 13:57:17.914071  632701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 13:57:17.944246  632701 logs.go:138] Found kubelet problem: Dec 13 13:50:03 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:50:03.276676    1217 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-controller-manager-kubernetes-upgrade-205521\" is forbidden: User \"system:node:kubernetes-upgrade-205521\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-205521' and this object" podUID="fa0fd1f01dd0b00869f76da422a71e29" pod="kube-system/kube-controller-manager-kubernetes-upgrade-205521"
	W1213 13:57:17.944483  632701 logs.go:138] Found kubelet problem: Dec 13 13:50:03 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:50:03.284036    1217 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-kubernetes-upgrade-205521\" is forbidden: User \"system:node:kubernetes-upgrade-205521\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-205521' and this object" podUID="cec684ae05acce064a01c92d0ad458c8" pod="kube-system/kube-scheduler-kubernetes-upgrade-205521"
	W1213 13:57:17.944621  632701 logs.go:138] Found kubelet problem: Dec 13 13:50:03 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:50:03.293269    1217 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-kubernetes-upgrade-205521\" is forbidden: User \"system:node:kubernetes-upgrade-205521\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-205521' and this object" podUID="bef4f296aff9f8f2fdde046481add839" pod="kube-system/etcd-kubernetes-upgrade-205521"
	I1213 13:57:18.005373  632701 logs.go:123] Gathering logs for kube-apiserver [f4c6e6ab7cffa52df32ee2633b832b2dbfd868871ed60cf0d6e5b742539dcd25] ...
	I1213 13:57:18.005414  632701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f4c6e6ab7cffa52df32ee2633b832b2dbfd868871ed60cf0d6e5b742539dcd25"
	W1213 13:57:18.032525  632701 logs.go:130] failed kube-apiserver [f4c6e6ab7cffa52df32ee2633b832b2dbfd868871ed60cf0d6e5b742539dcd25]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f4c6e6ab7cffa52df32ee2633b832b2dbfd868871ed60cf0d6e5b742539dcd25" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f4c6e6ab7cffa52df32ee2633b832b2dbfd868871ed60cf0d6e5b742539dcd25": Process exited with status 1
	stdout:
	
	stderr:
	E1213 13:57:18.029452    3489 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f4c6e6ab7cffa52df32ee2633b832b2dbfd868871ed60cf0d6e5b742539dcd25\": not found" containerID="f4c6e6ab7cffa52df32ee2633b832b2dbfd868871ed60cf0d6e5b742539dcd25"
	time="2025-12-13T13:57:18Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"f4c6e6ab7cffa52df32ee2633b832b2dbfd868871ed60cf0d6e5b742539dcd25\": not found"
	 output: 
	** stderr ** 
	E1213 13:57:18.029452    3489 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f4c6e6ab7cffa52df32ee2633b832b2dbfd868871ed60cf0d6e5b742539dcd25\": not found" containerID="f4c6e6ab7cffa52df32ee2633b832b2dbfd868871ed60cf0d6e5b742539dcd25"
	time="2025-12-13T13:57:18Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"f4c6e6ab7cffa52df32ee2633b832b2dbfd868871ed60cf0d6e5b742539dcd25\": not found"
	
	** /stderr **
	I1213 13:57:18.032552  632701 logs.go:123] Gathering logs for kube-scheduler [87b57156c8910e6a278f4c3d6e5c85fb4c959ef1e42013e56c28f1af1bfd16e5] ...
	I1213 13:57:18.032567  632701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 87b57156c8910e6a278f4c3d6e5c85fb4c959ef1e42013e56c28f1af1bfd16e5"
	I1213 13:57:18.062558  632701 logs.go:123] Gathering logs for container status ...
	I1213 13:57:18.062587  632701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 13:57:18.093183  632701 logs.go:123] Gathering logs for dmesg ...
	I1213 13:57:18.093217  632701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 13:57:18.109254  632701 out.go:374] Setting ErrFile to fd 2...
	I1213 13:57:18.109282  632701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1213 13:57:18.109369  632701 out.go:285] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1213 13:57:18.109385  632701 out.go:285]   Dec 13 13:50:03 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:50:03.276676    1217 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-controller-manager-kubernetes-upgrade-205521\" is forbidden: User \"system:node:kubernetes-upgrade-205521\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-205521' and this object" podUID="fa0fd1f01dd0b00869f76da422a71e29" pod="kube-system/kube-controller-manager-kubernetes-upgrade-205521"
	  Dec 13 13:50:03 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:50:03.276676    1217 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-controller-manager-kubernetes-upgrade-205521\" is forbidden: User \"system:node:kubernetes-upgrade-205521\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-205521' and this object" podUID="fa0fd1f01dd0b00869f76da422a71e29" pod="kube-system/kube-controller-manager-kubernetes-upgrade-205521"
	W1213 13:57:18.109395  632701 out.go:285]   Dec 13 13:50:03 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:50:03.284036    1217 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-kubernetes-upgrade-205521\" is forbidden: User \"system:node:kubernetes-upgrade-205521\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-205521' and this object" podUID="cec684ae05acce064a01c92d0ad458c8" pod="kube-system/kube-scheduler-kubernetes-upgrade-205521"
	  Dec 13 13:50:03 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:50:03.284036    1217 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-kubernetes-upgrade-205521\" is forbidden: User \"system:node:kubernetes-upgrade-205521\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-205521' and this object" podUID="cec684ae05acce064a01c92d0ad458c8" pod="kube-system/kube-scheduler-kubernetes-upgrade-205521"
	W1213 13:57:18.109408  632701 out.go:285]   Dec 13 13:50:03 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:50:03.293269    1217 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-kubernetes-upgrade-205521\" is forbidden: User \"system:node:kubernetes-upgrade-205521\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-205521' and this object" podUID="bef4f296aff9f8f2fdde046481add839" pod="kube-system/etcd-kubernetes-upgrade-205521"
	  Dec 13 13:50:03 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:50:03.293269    1217 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-kubernetes-upgrade-205521\" is forbidden: User \"system:node:kubernetes-upgrade-205521\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-205521' and this object" podUID="bef4f296aff9f8f2fdde046481add839" pod="kube-system/etcd-kubernetes-upgrade-205521"
	I1213 13:57:18.109415  632701 out.go:374] Setting ErrFile to fd 2...
	I1213 13:57:18.109425  632701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:57:28.113519  632701 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 13:57:33.114758  632701 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 13:57:33.148917  632701 out.go:203] 
	W1213 13:57:33.153403  632701 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1213 13:57:33.153430  632701 out.go:285] * 
	* 
	W1213 13:57:33.155499  632701 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:57:33.174435  632701 out.go:203] 

                                                
                                                
** /stderr **
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-205521 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 80
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-12-13 13:57:33.37138986 +0000 UTC m=+3174.320991179
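Editor's note: the fatal error above ("GUEST_START: ... apiserver healthz never reported healthy: context deadline exceeded") means the probe against https://192.168.94.2:8443/healthz kept timing out for the full 6m0s node wait. Below is a minimal standalone sketch of that probe, useful for reproducing the symptom by hand; it assumes the endpoint and the 5s per-request timeout seen in the log and is illustrative only, not part of the test suite.

// healthz_probe.go: hypothetical ad-hoc probe, not part of minikube or the tests.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Matches the "Client.Timeout exceeded" 5s budget seen in the log.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is self-signed for the cluster CA; skip
			// verification for this one-off probe only (assumption).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for attempt := 1; attempt <= 10; attempt++ {
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err != nil {
			fmt.Printf("attempt %d: %v\n", attempt, err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("attempt %d: %s %s\n", attempt, resp.Status, body)
		}
		time.Sleep(2 * time.Second)
	}
}

A healthy control plane answers "ok" within milliseconds; repeated timeouts like the ones in this run point at the apiserver itself (or etcd behind it) rather than at the test harness.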
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-205521
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-205521:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f1b21a355092c1ff1f3ccaf983c681c946bcef730e82153509ee2a4533ebe3ef",
	        "Created": "2025-12-13T13:49:20.205777415Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 624847,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:49:43.555049729Z",
	            "FinishedAt": "2025-12-13T13:49:42.632079844Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/f1b21a355092c1ff1f3ccaf983c681c946bcef730e82153509ee2a4533ebe3ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f1b21a355092c1ff1f3ccaf983c681c946bcef730e82153509ee2a4533ebe3ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/f1b21a355092c1ff1f3ccaf983c681c946bcef730e82153509ee2a4533ebe3ef/hosts",
	        "LogPath": "/var/lib/docker/containers/f1b21a355092c1ff1f3ccaf983c681c946bcef730e82153509ee2a4533ebe3ef/f1b21a355092c1ff1f3ccaf983c681c946bcef730e82153509ee2a4533ebe3ef-json.log",
	        "Name": "/kubernetes-upgrade-205521",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-205521:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "kubernetes-upgrade-205521",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f1b21a355092c1ff1f3ccaf983c681c946bcef730e82153509ee2a4533ebe3ef",
	                "LowerDir": "/var/lib/docker/overlay2/7264e9430685337c566b2c3bb21b8a51cf35b693ff024c4f958d996020042fb8-init/diff:/var/lib/docker/overlay2/be5aa5e3490e76c6aea57ece480ce7168b4c08e9f5040b5571a6aeb87c809618/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7264e9430685337c566b2c3bb21b8a51cf35b693ff024c4f958d996020042fb8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7264e9430685337c566b2c3bb21b8a51cf35b693ff024c4f958d996020042fb8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7264e9430685337c566b2c3bb21b8a51cf35b693ff024c4f958d996020042fb8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-205521",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-205521/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-205521",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-205521",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-205521",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6441b971e04f3f97aa560b041fd00d48bf463406ce29ba619487a82833392d29",
	            "SandboxKey": "/var/run/docker/netns/6441b971e04f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33382"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33383"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33386"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33384"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33385"
	                    }
	                ]
	            },
	            "Networks": {
	                "kubernetes-upgrade-205521": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e245f9445c0f38edb1d936fcd0e6322fd5f0c56d0e37ebff0ff02b19db5d83fc",
	                    "EndpointID": "d6bd550372d00920dff15b60d6ed669bc24b87fedf4bc653a5bd41d6ed7d084e",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "92:b0:a6:6c:76:e7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-205521",
	                        "f1b21a355092"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
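Editor's note: the inspect dump above is mostly boilerplate; the fields the post-mortem actually relies on are the container's IP on the kubernetes-upgrade-205521 network and the published host ports. A hypothetical helper that extracts only those fields with docker inspect --format (container name taken from the output above) could look like this sketch:

// inspect_fields.go: illustrative only; assumes docker is on PATH.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "kubernetes-upgrade-205521" // container name from the inspect output above
	templates := []string{
		// Node IP on its user-defined network (192.168.94.2 above).
		"{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}",
		// Published host ports, e.g. 8443/tcp -> 127.0.0.1:33385 above.
		"{{json .NetworkSettings.Ports}}",
	}
	for _, tmpl := range templates {
		out, err := exec.Command("docker", "inspect", "--format", tmpl, name).CombinedOutput()
		if err != nil {
			fmt.Printf("inspect with %q failed: %v (%s)\n", tmpl, err, out)
			continue
		}
		fmt.Printf("%s => %s", tmpl, out)
	}
}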
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-205521 -n kubernetes-upgrade-205521
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-205521 -n kubernetes-upgrade-205521: exit status 2 (15.780040698s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-205521 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-205521 logs -n 25: (1m1.026003344s)
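Editor's note: the failure box earlier in this run asks for `minikube logs --file=logs.txt` to be attached to any GitHub issue. A small wrapper that captures those logs for this profile is sketched below; the binary path and profile name are taken from the log, and the wrapper itself is illustrative rather than part of the harness.

// collect_logs.go: hypothetical sketch, not part of the test helpers.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same binary and profile the post-mortem uses; --file is the flag the
	// failure box recommends.
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "kubernetes-upgrade-205521",
		"logs", "--file=logs.txt")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "collecting logs failed:", err)
		os.Exit(1)
	}
	fmt.Println("wrote logs.txt")
}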
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-603819 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-603819                │ jenkins │ v1.37.0 │ 13 Dec 25 13:55 UTC │                     │
	│ ssh     │ -p cilium-603819 sudo crio config                                                                                                                                                                                                                   │ cilium-603819                │ jenkins │ v1.37.0 │ 13 Dec 25 13:55 UTC │                     │
	│ delete  │ -p cilium-603819                                                                                                                                                                                                                                    │ cilium-603819                │ jenkins │ v1.37.0 │ 13 Dec 25 13:55 UTC │ 13 Dec 25 13:55 UTC │
	│ start   │ -p no-preload-173346 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-173346            │ jenkins │ v1.37.0 │ 13 Dec 25 13:55 UTC │ 13 Dec 25 13:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-759693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-759693       │ jenkins │ v1.37.0 │ 13 Dec 25 13:56 UTC │ 13 Dec 25 13:56 UTC │
	│ stop    │ -p old-k8s-version-759693 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-759693       │ jenkins │ v1.37.0 │ 13 Dec 25 13:56 UTC │ 13 Dec 25 13:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-173346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-173346            │ jenkins │ v1.37.0 │ 13 Dec 25 13:56 UTC │ 13 Dec 25 13:56 UTC │
	│ stop    │ -p no-preload-173346 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-173346            │ jenkins │ v1.37.0 │ 13 Dec 25 13:56 UTC │ 13 Dec 25 13:56 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-759693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-759693       │ jenkins │ v1.37.0 │ 13 Dec 25 13:56 UTC │ 13 Dec 25 13:56 UTC │
	│ start   │ -p old-k8s-version-759693 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-759693       │ jenkins │ v1.37.0 │ 13 Dec 25 13:56 UTC │ 13 Dec 25 13:57 UTC │
	│ addons  │ enable dashboard -p no-preload-173346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-173346            │ jenkins │ v1.37.0 │ 13 Dec 25 13:56 UTC │ 13 Dec 25 13:56 UTC │
	│ start   │ -p no-preload-173346 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-173346            │ jenkins │ v1.37.0 │ 13 Dec 25 13:56 UTC │ 13 Dec 25 13:57 UTC │
	│ image   │ old-k8s-version-759693 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-759693       │ jenkins │ v1.37.0 │ 13 Dec 25 13:57 UTC │ 13 Dec 25 13:57 UTC │
	│ pause   │ -p old-k8s-version-759693 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-759693       │ jenkins │ v1.37.0 │ 13 Dec 25 13:57 UTC │ 13 Dec 25 13:57 UTC │
	│ unpause │ -p old-k8s-version-759693 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-759693       │ jenkins │ v1.37.0 │ 13 Dec 25 13:57 UTC │ 13 Dec 25 13:57 UTC │
	│ delete  │ -p old-k8s-version-759693                                                                                                                                                                                                                           │ old-k8s-version-759693       │ jenkins │ v1.37.0 │ 13 Dec 25 13:57 UTC │ 13 Dec 25 13:57 UTC │
	│ delete  │ -p old-k8s-version-759693                                                                                                                                                                                                                           │ old-k8s-version-759693       │ jenkins │ v1.37.0 │ 13 Dec 25 13:57 UTC │ 13 Dec 25 13:57 UTC │
	│ start   │ -p embed-certs-871380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                        │ embed-certs-871380           │ jenkins │ v1.37.0 │ 13 Dec 25 13:57 UTC │                     │
	│ image   │ no-preload-173346 image list --format=json                                                                                                                                                                                                          │ no-preload-173346            │ jenkins │ v1.37.0 │ 13 Dec 25 13:57 UTC │ 13 Dec 25 13:57 UTC │
	│ pause   │ -p no-preload-173346 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-173346            │ jenkins │ v1.37.0 │ 13 Dec 25 13:57 UTC │ 13 Dec 25 13:57 UTC │
	│ unpause │ -p no-preload-173346 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-173346            │ jenkins │ v1.37.0 │ 13 Dec 25 13:57 UTC │ 13 Dec 25 13:57 UTC │
	│ delete  │ -p no-preload-173346                                                                                                                                                                                                                                │ no-preload-173346            │ jenkins │ v1.37.0 │ 13 Dec 25 13:57 UTC │ 13 Dec 25 13:57 UTC │
	│ delete  │ -p no-preload-173346                                                                                                                                                                                                                                │ no-preload-173346            │ jenkins │ v1.37.0 │ 13 Dec 25 13:57 UTC │ 13 Dec 25 13:57 UTC │
	│ delete  │ -p disable-driver-mounts-909187                                                                                                                                                                                                                     │ disable-driver-mounts-909187 │ jenkins │ v1.37.0 │ 13 Dec 25 13:57 UTC │ 13 Dec 25 13:57 UTC │
	│ start   │ -p default-k8s-diff-port-264183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-264183 │ jenkins │ v1.37.0 │ 13 Dec 25 13:57 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:57:42
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:57:42.378991  713947 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:57:42.379129  713947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:57:42.379140  713947 out.go:374] Setting ErrFile to fd 2...
	I1213 13:57:42.379144  713947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:57:42.379375  713947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:57:42.379856  713947 out.go:368] Setting JSON to false
	I1213 13:57:42.381151  713947 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9605,"bootTime":1765624657,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:57:42.381208  713947 start.go:143] virtualization: kvm guest
	I1213 13:57:42.383474  713947 out.go:179] * [default-k8s-diff-port-264183] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:57:42.384830  713947 notify.go:221] Checking for updates...
	I1213 13:57:42.384849  713947 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:57:42.386269  713947 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:57:42.387595  713947 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	I1213 13:57:42.388929  713947 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	I1213 13:57:42.390307  713947 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:57:42.391523  713947 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:57:42.393066  713947 config.go:182] Loaded profile config "cert-expiration-913044": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:57:42.393219  713947 config.go:182] Loaded profile config "embed-certs-871380": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:57:42.393347  713947 config.go:182] Loaded profile config "kubernetes-upgrade-205521": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 13:57:42.393489  713947 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:57:42.419093  713947 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:57:42.419235  713947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:57:42.478402  713947 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-13 13:57:42.466474919 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:57:42.478553  713947 docker.go:319] overlay module found
	I1213 13:57:42.480775  713947 out.go:179] * Using the docker driver based on user configuration
	I1213 13:57:42.481910  713947 start.go:309] selected driver: docker
	I1213 13:57:42.481929  713947 start.go:927] validating driver "docker" against <nil>
	I1213 13:57:42.481942  713947 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:57:42.482532  713947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:57:42.540867  713947 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-13 13:57:42.531130118 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:57:42.541112  713947 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 13:57:42.541449  713947 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:57:42.543305  713947 out.go:179] * Using Docker driver with root privileges
	I1213 13:57:42.544497  713947 cni.go:84] Creating CNI manager for ""
	I1213 13:57:42.544597  713947 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 13:57:42.544613  713947 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 13:57:42.544717  713947 start.go:353] cluster config:
	{Name:default-k8s-diff-port-264183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-264183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:57:42.546204  713947 out.go:179] * Starting "default-k8s-diff-port-264183" primary control-plane node in "default-k8s-diff-port-264183" cluster
	I1213 13:57:42.547412  713947 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 13:57:42.548750  713947 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:57:42.549812  713947 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 13:57:42.549857  713947 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	I1213 13:57:42.549871  713947 cache.go:65] Caching tarball of preloaded images
	I1213 13:57:42.549913  713947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:57:42.549993  713947 preload.go:238] Found /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 13:57:42.550014  713947 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1213 13:57:42.550150  713947 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/default-k8s-diff-port-264183/config.json ...
	I1213 13:57:42.550180  713947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/default-k8s-diff-port-264183/config.json: {Name:mkdd3099464ef5cab3bc592354b03bad428fe4a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:57:42.571872  713947 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 13:57:42.571899  713947 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 13:57:42.571927  713947 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:57:42.571959  713947 start.go:360] acquireMachinesLock for default-k8s-diff-port-264183: {Name:mke537c0a7414c27f38876eec20fb8af67ab071e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:57:42.572064  713947 start.go:364] duration metric: took 87.027µs to acquireMachinesLock for "default-k8s-diff-port-264183"
	I1213 13:57:42.572086  713947 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-264183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-264183 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disabl
eCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 13:57:42.572152  713947 start.go:125] createHost starting for "" (driver="docker")
	I1213 13:57:39.861962  710237 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 13:57:39.862180  710237 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-871380 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 13:57:40.474705  710237 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 13:57:40.474846  710237 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-871380 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 13:57:40.655747  710237 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 13:57:40.935074  710237 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 13:57:41.421636  710237 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 13:57:41.421752  710237 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 13:57:41.459217  710237 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 13:57:41.669334  710237 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 13:57:42.422391  710237 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 13:57:42.653308  710237 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 13:57:42.709305  710237 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 13:57:42.709925  710237 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 13:57:42.714799  710237 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 13:57:42.717893  710237 out.go:252]   - Booting up control plane ...
	I1213 13:57:42.718036  710237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 13:57:42.718140  710237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 13:57:42.718238  710237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 13:57:42.739233  710237 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 13:57:42.739397  710237 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 13:57:42.747731  710237 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 13:57:42.748974  710237 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 13:57:42.749050  710237 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 13:57:42.863552  710237 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 13:57:42.863846  710237 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 13:57:43.364811  710237 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.902557ms
	I1213 13:57:43.369138  710237 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 13:57:43.369266  710237 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1213 13:57:43.369457  710237 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 13:57:43.369579  710237 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 13:57:42.574954  713947 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 13:57:42.575175  713947 start.go:159] libmachine.API.Create for "default-k8s-diff-port-264183" (driver="docker")
	I1213 13:57:42.575207  713947 client.go:173] LocalClient.Create starting
	I1213 13:57:42.575258  713947 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-401936/.minikube/certs/ca.pem
	I1213 13:57:42.575302  713947 main.go:143] libmachine: Decoding PEM data...
	I1213 13:57:42.575350  713947 main.go:143] libmachine: Parsing certificate...
	I1213 13:57:42.575415  713947 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-401936/.minikube/certs/cert.pem
	I1213 13:57:42.575444  713947 main.go:143] libmachine: Decoding PEM data...
	I1213 13:57:42.575456  713947 main.go:143] libmachine: Parsing certificate...
	I1213 13:57:42.575785  713947 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-264183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 13:57:42.592913  713947 cli_runner.go:211] docker network inspect default-k8s-diff-port-264183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 13:57:42.592987  713947 network_create.go:284] running [docker network inspect default-k8s-diff-port-264183] to gather additional debugging logs...
	I1213 13:57:42.593009  713947 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-264183
	W1213 13:57:42.610915  713947 cli_runner.go:211] docker network inspect default-k8s-diff-port-264183 returned with exit code 1
	I1213 13:57:42.610951  713947 network_create.go:287] error running [docker network inspect default-k8s-diff-port-264183]: docker network inspect default-k8s-diff-port-264183: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-264183 not found
	I1213 13:57:42.610973  713947 network_create.go:289] output of [docker network inspect default-k8s-diff-port-264183]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-264183 not found
	
	** /stderr **
	I1213 13:57:42.611133  713947 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:57:42.630037  713947 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dd549186b5b6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:8b:bc:e4:2d:3c} reservation:<nil>}
	I1213 13:57:42.630866  713947 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5734ddcf37ca IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:c9:43:86:d0:bf} reservation:<nil>}
	I1213 13:57:42.631443  713947 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-43a912dfaa3b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:2d:36:60:9c:91} reservation:<nil>}
	I1213 13:57:42.632209  713947 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d15029961867 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:83:16:f1:3e:0d} reservation:<nil>}
	I1213 13:57:42.632830  713947 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-c7a050687b65 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:92:80:a9:a7:14:bb} reservation:<nil>}
	I1213 13:57:42.633304  713947 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-e245f9445c0f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:82:16:f9:63:b0:c4} reservation:<nil>}
	I1213 13:57:42.634159  713947 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f01ac0}
	I1213 13:57:42.634180  713947 network_create.go:124] attempt to create docker network default-k8s-diff-port-264183 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1213 13:57:42.634226  713947 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-264183 default-k8s-diff-port-264183
	I1213 13:57:42.683253  713947 network_create.go:108] docker network default-k8s-diff-port-264183 192.168.103.0/24 created
	I1213 13:57:42.683287  713947 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-264183" container
	I1213 13:57:42.683395  713947 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 13:57:42.703660  713947 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-264183 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-264183 --label created_by.minikube.sigs.k8s.io=true
	I1213 13:57:42.725920  713947 oci.go:103] Successfully created a docker volume default-k8s-diff-port-264183
	I1213 13:57:42.726028  713947 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-264183-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-264183 --entrypoint /usr/bin/test -v default-k8s-diff-port-264183:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 13:57:43.133654  713947 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-264183
	I1213 13:57:43.133743  713947 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 13:57:43.133757  713947 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 13:57:43.133850  713947 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-264183:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 13:57:47.322120  713947 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-264183:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.188219437s)
	I1213 13:57:47.322165  713947 kic.go:203] duration metric: took 4.188402019s to extract preloaded images to volume ...
	W1213 13:57:47.322349  713947 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1213 13:57:47.322415  713947 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1213 13:57:47.322474  713947 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 13:57:44.616508  710237 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.247137855s
	I1213 13:57:45.830404  710237 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.461216136s
	I1213 13:57:48.370631  710237 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001345833s
	I1213 13:57:48.387902  710237 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 13:57:48.400044  710237 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 13:57:48.409405  710237 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 13:57:48.409706  710237 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-871380 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 13:57:48.420777  710237 kubeadm.go:319] [bootstrap-token] Using token: tjbj36.uc7ik913sowxknnp
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                 NAMESPACE
	eef2ce790d8ca       aa9d02839d8de       About a minute ago   Exited              kube-apiserver            7                   e3cf910297482       kube-apiserver-kubernetes-upgrade-205521            kube-system
	156e8c58c1c64       45f3cc72d235f       About a minute ago   Running             kube-controller-manager   1                   0333c685c1e12       kube-controller-manager-kubernetes-upgrade-205521   kube-system
	a2b369f74009c       45f3cc72d235f       5 minutes ago        Exited              kube-controller-manager   0                   0333c685c1e12       kube-controller-manager-kubernetes-upgrade-205521   kube-system
	87b57156c8910       7bb6219ddab95       5 minutes ago        Running             kube-scheduler            0                   a86a36ee84182       kube-scheduler-kubernetes-upgrade-205521            kube-system
	9305862e4b5f7       a3e246e9556e9       6 minutes ago        Running             etcd                      0                   a867011861305       etcd-kubernetes-upgrade-205521                      kube-system
	
	
	==> containerd <==
	Dec 13 13:57:28 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:28.040807602Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8da27c4db85deb86e6f69d9cc0b07f51.slice/cri-containerd-156e8c58c1c64ffc65036d6407b72e9c20db28c980b8c5e8a73afd3a477c1d7f.scope/hugetlb.1GB.events\""
	Dec 13 13:57:28 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:28.041666650Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2157c04b1ae5b0daaac812e91d03801.slice/cri-containerd-9305862e4b5f7874e796d34ea81ef7d8669a1b5f95876296661beaaabd7b50b2.scope/hugetlb.2MB.events\""
	Dec 13 13:57:28 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:28.041764184Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2157c04b1ae5b0daaac812e91d03801.slice/cri-containerd-9305862e4b5f7874e796d34ea81ef7d8669a1b5f95876296661beaaabd7b50b2.scope/hugetlb.1GB.events\""
	Dec 13 13:57:28 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:28.042658156Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0ca9a8c49c1fe565634e17a3cb961eb.slice/cri-containerd-87b57156c8910e6a278f4c3d6e5c85fb4c959ef1e42013e56c28f1af1bfd16e5.scope/hugetlb.2MB.events\""
	Dec 13 13:57:28 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:28.042772798Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0ca9a8c49c1fe565634e17a3cb961eb.slice/cri-containerd-87b57156c8910e6a278f4c3d6e5c85fb4c959ef1e42013e56c28f1af1bfd16e5.scope/hugetlb.1GB.events\""
	Dec 13 13:57:38 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:38.053251399Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8da27c4db85deb86e6f69d9cc0b07f51.slice/cri-containerd-156e8c58c1c64ffc65036d6407b72e9c20db28c980b8c5e8a73afd3a477c1d7f.scope/hugetlb.2MB.events\""
	Dec 13 13:57:38 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:38.053377230Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8da27c4db85deb86e6f69d9cc0b07f51.slice/cri-containerd-156e8c58c1c64ffc65036d6407b72e9c20db28c980b8c5e8a73afd3a477c1d7f.scope/hugetlb.1GB.events\""
	Dec 13 13:57:38 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:38.054345922Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2157c04b1ae5b0daaac812e91d03801.slice/cri-containerd-9305862e4b5f7874e796d34ea81ef7d8669a1b5f95876296661beaaabd7b50b2.scope/hugetlb.2MB.events\""
	Dec 13 13:57:38 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:38.054452202Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2157c04b1ae5b0daaac812e91d03801.slice/cri-containerd-9305862e4b5f7874e796d34ea81ef7d8669a1b5f95876296661beaaabd7b50b2.scope/hugetlb.1GB.events\""
	Dec 13 13:57:38 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:38.055289998Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0ca9a8c49c1fe565634e17a3cb961eb.slice/cri-containerd-87b57156c8910e6a278f4c3d6e5c85fb4c959ef1e42013e56c28f1af1bfd16e5.scope/hugetlb.2MB.events\""
	Dec 13 13:57:38 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:38.055443317Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0ca9a8c49c1fe565634e17a3cb961eb.slice/cri-containerd-87b57156c8910e6a278f4c3d6e5c85fb4c959ef1e42013e56c28f1af1bfd16e5.scope/hugetlb.1GB.events\""
	Dec 13 13:57:44 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:44.672648088Z" level=info msg="container event discarded" container=a86a36ee84182e2a35289b1c98862f81b0da72ef5254db80de94c472adba3f4e type=CONTAINER_CREATED_EVENT
	Dec 13 13:57:44 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:44.672764464Z" level=info msg="container event discarded" container=a86a36ee84182e2a35289b1c98862f81b0da72ef5254db80de94c472adba3f4e type=CONTAINER_STARTED_EVENT
	Dec 13 13:57:44 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:44.672786996Z" level=info msg="container event discarded" container=0333c685c1e12b1bbd75be7532f4774d83abcecb713fc2e63e7afdcf12d6d482 type=CONTAINER_CREATED_EVENT
	Dec 13 13:57:44 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:44.672800541Z" level=info msg="container event discarded" container=0333c685c1e12b1bbd75be7532f4774d83abcecb713fc2e63e7afdcf12d6d482 type=CONTAINER_STARTED_EVENT
	Dec 13 13:57:44 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:44.783084507Z" level=info msg="container event discarded" container=87b57156c8910e6a278f4c3d6e5c85fb4c959ef1e42013e56c28f1af1bfd16e5 type=CONTAINER_CREATED_EVENT
	Dec 13 13:57:44 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:44.861500929Z" level=info msg="container event discarded" container=87b57156c8910e6a278f4c3d6e5c85fb4c959ef1e42013e56c28f1af1bfd16e5 type=CONTAINER_STARTED_EVENT
	Dec 13 13:57:44 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:44.894779056Z" level=info msg="container event discarded" container=a2b369f74009ca47aa6cbb3c8ef1c63a92b9443c04f7fd5ff652babeeae894b1 type=CONTAINER_CREATED_EVENT
	Dec 13 13:57:44 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:44.979057957Z" level=info msg="container event discarded" container=a2b369f74009ca47aa6cbb3c8ef1c63a92b9443c04f7fd5ff652babeeae894b1 type=CONTAINER_STARTED_EVENT
	Dec 13 13:57:48 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:48.066243510Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2157c04b1ae5b0daaac812e91d03801.slice/cri-containerd-9305862e4b5f7874e796d34ea81ef7d8669a1b5f95876296661beaaabd7b50b2.scope/hugetlb.2MB.events\""
	Dec 13 13:57:48 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:48.066439509Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2157c04b1ae5b0daaac812e91d03801.slice/cri-containerd-9305862e4b5f7874e796d34ea81ef7d8669a1b5f95876296661beaaabd7b50b2.scope/hugetlb.1GB.events\""
	Dec 13 13:57:48 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:48.067670383Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0ca9a8c49c1fe565634e17a3cb961eb.slice/cri-containerd-87b57156c8910e6a278f4c3d6e5c85fb4c959ef1e42013e56c28f1af1bfd16e5.scope/hugetlb.2MB.events\""
	Dec 13 13:57:48 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:48.067820566Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0ca9a8c49c1fe565634e17a3cb961eb.slice/cri-containerd-87b57156c8910e6a278f4c3d6e5c85fb4c959ef1e42013e56c28f1af1bfd16e5.scope/hugetlb.1GB.events\""
	Dec 13 13:57:48 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:48.068714450Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8da27c4db85deb86e6f69d9cc0b07f51.slice/cri-containerd-156e8c58c1c64ffc65036d6407b72e9c20db28c980b8c5e8a73afd3a477c1d7f.scope/hugetlb.2MB.events\""
	Dec 13 13:57:48 kubernetes-upgrade-205521 containerd[1941]: time="2025-12-13T13:57:48.068837454Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8da27c4db85deb86e6f69d9cc0b07f51.slice/cri-containerd-156e8c58c1c64ffc65036d6407b72e9c20db28c980b8c5e8a73afd3a477c1d7f.scope/hugetlb.1GB.events\""
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 3d 25 07 3f b0 08 06
	[ +15.550392] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 5b b2 4e f6 0c 08 06
	[  +0.000437] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 3d 25 07 3f b0 08 06
	[Dec13 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 56 d0 e6 62 ca 08 06
	[  +0.000156] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 2b b1 e9 34 e9 08 06
	[  +9.601084] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 6b 2f 7c 08 35 08 06
	[  +6.680640] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 7a 15 04 2e f9 08 06
	[  +0.000316] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 26 9c 63 03 a8 a5 08 06
	[  +0.000500] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5e bf e9 59 0c fc 08 06
	[ +14.220693] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 6b 48 e9 3e 65 08 06
	[  +0.000354] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 96 6b 2f 7c 08 35 08 06
	[ +17.192216] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 ce b1 a0 1c 7b 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 56 d0 e6 62 ca 08 06
	
	
	==> etcd [9305862e4b5f7874e796d34ea81ef7d8669a1b5f95876296661beaaabd7b50b2] <==
	{"level":"info","ts":"2025-12-13T13:50:56.864675Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 3"}
	{"level":"info","ts":"2025-12-13T13:50:56.864722Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-12-13T13:50:56.864777Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-13T13:50:56.864792Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-13T13:50:56.864826Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 4"}
	{"level":"info","ts":"2025-12-13T13:50:56.866825Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 4"}
	{"level":"info","ts":"2025-12-13T13:50:56.866866Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-13T13:50:56.866889Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 4"}
	{"level":"info","ts":"2025-12-13T13:50:56.866897Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 4"}
	{"level":"info","ts":"2025-12-13T13:50:56.867562Z","caller":"etcdserver/server.go:2425","msg":"updating cluster version using v3 API","from":"3.5","to":"3.6"}
	{"level":"info","ts":"2025-12-13T13:50:56.868063Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:kubernetes-upgrade-205521 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-13T13:50:56.868069Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T13:50:56.868104Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T13:50:56.868242Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-13T13:50:56.868337Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-13T13:50:56.868458Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","from":"3.5","to":"3.6"}
	{"level":"info","ts":"2025-12-13T13:50:56.868542Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-13T13:50:56.868578Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-13T13:50:56.868608Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-13T13:50:56.868709Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"warn","ts":"2025-12-13T13:50:56.869486Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-12-13T13:50:56.869551Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-13T13:50:56.869546Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-13T13:50:56.872195Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-13T13:50:56.872558Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:58:49 up  2:41,  0 user,  load average: 3.68, 3.07, 2.16
	Linux kubernetes-upgrade-205521 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [eef2ce790d8ca3799d73afae9e6eabd72663b64b287490d7dde0393c7c057dfb] <==
	I1213 13:56:29.694782       1 options.go:263] external host was not specified, using 192.168.94.2
	I1213 13:56:29.697563       1 server.go:150] Version: v1.35.0-beta.0
	I1213 13:56:29.697596       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1213 13:56:29.697946       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8443: listen tcp 0.0.0.0:8443: bind: address already in use"
	
	
	==> kube-controller-manager [156e8c58c1c64ffc65036d6407b72e9c20db28c980b8c5e8a73afd3a477c1d7f] <==
	I1213 13:56:21.593849       1 serving.go:386] Generated self-signed cert in-memory
	I1213 13:56:21.601423       1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
	I1213 13:56:21.601519       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:56:21.603396       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1213 13:56:21.603396       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1213 13:56:21.603575       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1213 13:56:21.603596       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [a2b369f74009ca47aa6cbb3c8ef1c63a92b9443c04f7fd5ff652babeeae894b1] <==
	I1213 13:52:45.140712       1 serving.go:386] Generated self-signed cert in-memory
	I1213 13:52:45.147659       1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
	I1213 13:52:45.147680       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:52:45.149092       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1213 13:52:45.149133       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1213 13:52:45.149228       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1213 13:52:45.149385       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1213 13:55:46.159586       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: the server was unable to return a response in the time allotted, but may still be processing the request"
	
	
	==> kube-scheduler [87b57156c8910e6a278f4c3d6e5c85fb4c959ef1e42013e56c28f1af1bfd16e5] <==
	I1213 13:53:44.998216       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:53:44.998331       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 13:53:44.998392       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1213 13:53:45.003442       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	E1213 13:53:45.005812       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	E1213 13:53:45.005812       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	E1213 13:53:45.006250       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	E1213 13:53:45.006515       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	E1213 13:53:45.006525       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	E1213 13:53:45.006564       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	E1213 13:53:45.006589       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	E1213 13:53:45.006603       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	E1213 13:53:45.006610       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	E1213 13:53:45.006629       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	E1213 13:53:45.006629       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	E1213 13:53:45.007883       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	E1213 13:53:45.006675       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	E1213 13:53:45.006645       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	E1213 13:53:45.006708       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	E1213 13:53:45.006725       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	E1213 13:53:45.006819       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	E1213 13:53:45.006693       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="a watch stream was requested by the client but the required storage feature RequestWatchProgress is disabled"
	I1213 13:53:45.098810       1 shared_informer.go:377] "Caches are synced"
	E1213 13:54:19.102255       1 event_broadcaster.go:270] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{storage-provisioner.1880caca856b4bbb  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},EventTime:2025-12-13 13:53:45.099830071 +0000 UTC m=+60.247972745,Series:nil,ReportingController:default-scheduler,ReportingInstance:default-scheduler-kubernetes-upgrade-205521,Action:Scheduling,Reason:FailedScheduling,Regarding:{Pod kube-system storage-provisioner 60324dd4-c475-465a-9af2-b3488345535d v1 370 },Related:nil,Note:0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.,Type:Warning,DeprecatedSource:{ },DeprecatedFirstTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedLastTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedCount:0,}"
	E1213 13:54:19.102532       1 pod_status_patch.go:110] "Failed to patch pod status" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/storage-provisioner"
	
	
	==> kubelet <==
	Dec 13 13:58:05 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:05.985254    1217 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-205521\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-205521?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 13:58:06 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:06.595034    1217 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-205521?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Dec 13 13:58:07 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:07.559123    1217 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-kubernetes-upgrade-205521" containerName="kube-apiserver"
	Dec 13 13:58:07 kubernetes-upgrade-205521 kubelet[1217]: I1213 13:58:07.559158    1217 scope.go:122] "RemoveContainer" containerID="eef2ce790d8ca3799d73afae9e6eabd72663b64b287490d7dde0393c7c057dfb"
	Dec 13 13:58:07 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:07.559292    1217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-205521_kube-system(9c7060135623f8a41d98707d4017b77c)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-205521" podUID="9c7060135623f8a41d98707d4017b77c"
	Dec 13 13:58:07 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:07.583068    1217 status_manager.go:1045] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-kubernetes-upgrade-205521)" podUID="8da27c4db85deb86e6f69d9cc0b07f51" pod="kube-system/kube-controller-manager-kubernetes-upgrade-205521"
	Dec 13 13:58:07 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:07.711421    1217 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 13 13:58:12 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:12.042932    1217 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-apiserver-kubernetes-upgrade-205521.1880caa08352c5de  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-kubernetes-upgrade-205521,UID:9c7060135623f8a41d98707d4017b77c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver in pod kube-apiserver-kubernetes-upgrade-205521_kube-system(9c7060135623f8a41d98707d4017b77c),Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-205521,},FirstTimestamp:2025-12-13 13:50:44.676052446 +0000 UTC m=+47.198195288,LastTimestamp:2025-12-13 13:50:49.316621261 +0000 UTC m=+51.838764095,Count:4,Type:
Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-205521,}"
	Dec 13 13:58:12 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:12.712764    1217 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 13 13:58:15 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:15.985922    1217 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-205521\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-205521?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 13:58:17 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:17.713990    1217 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 13 13:58:22 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:22.715913    1217 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 13 13:58:23 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:23.596721    1217 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-205521?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Dec 13 13:58:25 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:25.986900    1217 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-205521\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-205521?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 13:58:25 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:25.986943    1217 kubelet_node_status.go:461] "Unable to update node status" err="update node status exceeds retry count"
	Dec 13 13:58:27 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:27.716990    1217 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 13 13:58:32 kubernetes-upgrade-205521 kubelet[1217]: I1213 13:58:32.559311    1217 kubelet.go:3323] "Trying to delete pod" pod="kube-system/etcd-kubernetes-upgrade-205521" podUID="30c37e88-049e-4c49-8d56-2104d8ca26f7"
	Dec 13 13:58:32 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:32.718796    1217 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 13 13:58:37 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:37.720415    1217 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 13 13:58:40 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:40.598002    1217 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-205521?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Dec 13 13:58:42 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:42.722167    1217 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 13 13:58:46 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:46.044751    1217 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-apiserver-kubernetes-upgrade-205521.1880caa04774595b  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-kubernetes-upgrade-205521,UID:9c7060135623f8a41d98707d4017b77c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" already present on machine and can be accessed by the pod,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-205521,},FirstTimestamp:2025-12-13 13:50:43.671619931 +0000 UTC m=+46.193762768,LastTimestamp:2025-12-13 13:50:54.005144392 +0000 UTC m=+56.527287232,Count:2,Type:Normal,EventTime:0001-0
1-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-205521,}"
	Dec 13 13:58:46 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:46.246082    1217 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T13:58:36Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T13:58:36Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T13:58:36Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T13:58:36Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58\\\",\\\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\\\"],\\\"sizeBytes\\\":27671920},{\\\"names\\\":[\\\"registry.k8
s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d\\\",\\\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\\\"],\\\"sizeBytes\\\":23121143},{\\\"names\\\":[\\\"registry.k8s.io/etcd:3.6.5-0\\\"],\\\"sizeBytes\\\":22869579},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6\\\",\\\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\\\"],\\\"sizeBytes\\\":17228488},{\\\"names\\\":[\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\"],\\\"sizeBytes\\\":9057171},{\\\"names\\\":[\\\"registry.k8s.io/pause:3.10.1\\\"],\\\"sizeBytes\\\":317967}]}}\" for node \"kubernetes-upgrade-205521\": Patch \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-205521/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 13:58:47 kubernetes-upgrade-205521 kubelet[1217]: E1213 13:58:47.723442    1217 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 13 13:58:49 kubernetes-upgrade-205521 kubelet[1217]: I1213 13:58:49.559380    1217 kubelet.go:3323] "Trying to delete pod" pod="kube-system/kube-controller-manager-kubernetes-upgrade-205521" podUID="6d8de629-2498-48c0-b13e-f8cda6cd0bf8"
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-205521 -n kubernetes-upgrade-205521
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-205521 -n kubernetes-upgrade-205521: exit status 2 (15.914721516s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-205521" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-205521" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-205521
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-205521: (2.289480591s)
--- FAIL: TestKubernetesUpgrade (599.37s)
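A possible manual follow-up, sketched only from the logs above: the kube-apiserver container exits with "failed to listen on 0.0.0.0:8443: bind: address already in use" and then sits in CrashLoopBackOff while the kubelet keeps reporting "cni plugin not initialized", so "kubectl describe nodes" and the final status check time out. The commands below are hypothetical and not part of the test run: they assume a live reproduction of the kubernetes-upgrade-205521 profile (the cleanup step above deletes it) and that ss and crictl are available inside the kicbase node image.

	out/minikube-linux-amd64 -p kubernetes-upgrade-205521 logs --problems
	out/minikube-linux-amd64 -p kubernetes-upgrade-205521 ssh -- "sudo ss -ltnp | grep 8443"
	out/minikube-linux-amd64 -p kubernetes-upgrade-205521 ssh -- "sudo crictl ps -a | grep kube-apiserver"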


Test pass (377/420)

Order  Test name  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 12.2
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 9.72
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.23
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 10.19
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.28
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
29 TestDownloadOnlyKic 0.41
30 TestBinaryMirror 0.84
31 TestOffline 58.97
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 122.77
38 TestAddons/serial/Volcano 40.98
40 TestAddons/serial/GCPAuth/Namespaces 0.11
41 TestAddons/serial/GCPAuth/FakeCredentials 9.46
44 TestAddons/parallel/Registry 14.98
45 TestAddons/parallel/RegistryCreds 0.66
46 TestAddons/parallel/Ingress 21.27
47 TestAddons/parallel/InspektorGadget 10.78
48 TestAddons/parallel/MetricsServer 5.65
50 TestAddons/parallel/CSI 31.79
51 TestAddons/parallel/Headlamp 18.49
52 TestAddons/parallel/CloudSpanner 5.52
54 TestAddons/parallel/NvidiaDevicePlugin 6.56
55 TestAddons/parallel/Yakd 10.66
56 TestAddons/parallel/AmdGpuDevicePlugin 6.5
57 TestAddons/StoppedEnableDisable 12.64
58 TestCertOptions 29.03
59 TestCertExpiration 214.39
61 TestForceSystemdFlag 23.09
62 TestForceSystemdEnv 38.83
67 TestErrorSpam/setup 19.17
68 TestErrorSpam/start 0.68
69 TestErrorSpam/status 0.96
70 TestErrorSpam/pause 1.46
71 TestErrorSpam/unpause 1.53
72 TestErrorSpam/stop 1.5
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 38.31
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 5.8
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.06
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.89
84 TestFunctional/serial/CacheCmd/cache/add_local 1.91
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 37.02
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.22
95 TestFunctional/serial/LogsFileCmd 1.23
96 TestFunctional/serial/InvalidService 4.54
98 TestFunctional/parallel/ConfigCmd 0.45
100 TestFunctional/parallel/DryRun 0.39
101 TestFunctional/parallel/InternationalLanguage 0.16
102 TestFunctional/parallel/StatusCmd 1.02
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 30.36
110 TestFunctional/parallel/SSHCmd 0.54
111 TestFunctional/parallel/CpCmd 1.76
112 TestFunctional/parallel/MySQL 23.58
113 TestFunctional/parallel/FileSync 0.29
114 TestFunctional/parallel/CertSync 1.68
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
122 TestFunctional/parallel/License 0.4
124 TestFunctional/parallel/Version/short 0.06
125 TestFunctional/parallel/Version/components 0.48
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
130 TestFunctional/parallel/ImageCommands/ImageBuild 3.57
131 TestFunctional/parallel/ImageCommands/Setup 1.75
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.17
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.04
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.22
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.72
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
143 TestFunctional/parallel/ProfileCmd/profile_list 0.4
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
145 TestFunctional/parallel/MountCmd/any-port 6.99
146 TestFunctional/parallel/MountCmd/specific-port 1.86
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.89
149 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
150 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
152 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 6.2
153 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
154 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
158 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
159 TestFunctional/parallel/ServiceCmd/List 1.71
160 TestFunctional/parallel/ServiceCmd/JSONOutput 1.71
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 34.62
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 5.77
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.04
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.72
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.85
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.29
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.56
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 44.98
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.23
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.24
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.47
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.52
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.47
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.21
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 1.06
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 10.54
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.15
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 18.65
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.67
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.84
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 26.13
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.32
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.85
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.56
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.33
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 9.19
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 10.21
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.5
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.5
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.36
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.35
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.35
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.08
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.56
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.27
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.26
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.27
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.3
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 5.21
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.82
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.23
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.18
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.18
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.2
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.06
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.44
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.41
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 2.33
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.41
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 15.15
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.38
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.54
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.67
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.42
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.96
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.58
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 107.85
266 TestMultiControlPlane/serial/DeployApp 5.79
267 TestMultiControlPlane/serial/PingHostFromPods 1.19
268 TestMultiControlPlane/serial/AddWorkerNode 23.47
269 TestMultiControlPlane/serial/NodeLabels 0.06
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
271 TestMultiControlPlane/serial/CopyFile 17.43
272 TestMultiControlPlane/serial/StopSecondaryNode 12.74
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
274 TestMultiControlPlane/serial/RestartSecondaryNode 8.58
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 94.02
277 TestMultiControlPlane/serial/DeleteSecondaryNode 9.41
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
279 TestMultiControlPlane/serial/StopCluster 36.14
280 TestMultiControlPlane/serial/RestartCluster 57.3
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
282 TestMultiControlPlane/serial/AddSecondaryNode 72.8
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
288 TestJSONOutput/start/Command 39.71
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Command 0.75
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Command 0.59
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 5.88
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.24
313 TestKicCustomNetwork/create_custom_network 30.82
314 TestKicCustomNetwork/use_default_bridge_network 23.29
315 TestKicExistingNetwork 22.35
316 TestKicCustomSubnet 25.77
317 TestKicStaticIP 25.44
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 49.99
322 TestMountStart/serial/StartWithMountFirst 4.4
323 TestMountStart/serial/VerifyMountFirst 0.28
324 TestMountStart/serial/StartWithMountSecond 4.41
325 TestMountStart/serial/VerifyMountSecond 0.27
326 TestMountStart/serial/DeleteFirst 1.67
327 TestMountStart/serial/VerifyMountPostDelete 0.27
328 TestMountStart/serial/Stop 1.26
329 TestMountStart/serial/RestartStopped 7.45
330 TestMountStart/serial/VerifyMountPostStop 0.27
333 TestMultiNode/serial/FreshStart2Nodes 63.25
334 TestMultiNode/serial/DeployApp2Nodes 4.9
335 TestMultiNode/serial/PingHostFrom2Pods 0.83
336 TestMultiNode/serial/AddNode 23.92
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.65
339 TestMultiNode/serial/CopyFile 9.85
340 TestMultiNode/serial/StopNode 2.24
341 TestMultiNode/serial/StartAfterStop 6.84
342 TestMultiNode/serial/RestartKeepsNodes 71.2
343 TestMultiNode/serial/DeleteNode 5.24
344 TestMultiNode/serial/StopMultiNode 24.04
345 TestMultiNode/serial/RestartMultiNode 44.77
346 TestMultiNode/serial/ValidateNameConflict 24.8
351 TestPreload 98.8
353 TestScheduledStopUnix 98.73
356 TestInsufficientStorage 11.56
357 TestRunningBinaryUpgrade 299.32
360 TestMissingContainerUpgrade 102.73
368 TestStoppedBinaryUpgrade/Setup 3.99
369 TestStoppedBinaryUpgrade/Upgrade 327.89
371 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
372 TestNoKubernetes/serial/StartWithK8s 26.21
373 TestNoKubernetes/serial/StartWithStopK8s 6.07
374 TestNoKubernetes/serial/Start 3.6
375 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
376 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
377 TestNoKubernetes/serial/ProfileList 48.45
378 TestNoKubernetes/serial/Stop 1.27
379 TestNoKubernetes/serial/StartNoArgs 6.66
380 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
382 TestPause/serial/Start 42.57
383 TestPause/serial/SecondStartNoReconfiguration 5.76
384 TestPause/serial/Pause 0.66
385 TestPause/serial/VerifyStatus 0.34
386 TestPause/serial/Unpause 0.62
387 TestPause/serial/PauseAgain 0.75
388 TestPause/serial/DeletePaused 2.73
389 TestPause/serial/VerifyDeletedResources 19.2
390 TestStoppedBinaryUpgrade/MinikubeLogs 2.76
398 TestNetworkPlugins/group/false 3.92
400 TestStartStop/group/old-k8s-version/serial/FirstStart 48.98
405 TestStartStop/group/no-preload/serial/FirstStart 48.28
406 TestStartStop/group/old-k8s-version/serial/DeployApp 9.25
407 TestStartStop/group/no-preload/serial/DeployApp 10.22
408 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.89
409 TestStartStop/group/old-k8s-version/serial/Stop 12.04
410 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.8
411 TestStartStop/group/no-preload/serial/Stop 12.1
412 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
413 TestStartStop/group/old-k8s-version/serial/SecondStart 44.81
414 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
415 TestStartStop/group/no-preload/serial/SecondStart 50.09
416 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
417 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
418 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
419 TestStartStop/group/old-k8s-version/serial/Pause 2.79
420 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
422 TestStartStop/group/embed-certs/serial/FirstStart 39.59
423 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
424 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
425 TestStartStop/group/no-preload/serial/Pause 2.82
427 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 38.09
428 TestStartStop/group/embed-certs/serial/DeployApp 9.3
430 TestStartStop/group/newest-cni/serial/FirstStart 22.42
431 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.87
432 TestStartStop/group/embed-certs/serial/Stop 12.13
433 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
434 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.89
435 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
436 TestStartStop/group/embed-certs/serial/SecondStart 49.89
437 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.1
438 TestStartStop/group/newest-cni/serial/DeployApp 0
439 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.78
440 TestStartStop/group/newest-cni/serial/Stop 1.32
441 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
442 TestStartStop/group/newest-cni/serial/SecondStart 11.78
443 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
444 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.89
445 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
446 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
447 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
448 TestStartStop/group/newest-cni/serial/Pause 3.38
449 TestNetworkPlugins/group/auto/Start 42.13
450 TestNetworkPlugins/group/kindnet/Start 38.73
451 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
452 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
453 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
454 TestStartStop/group/embed-certs/serial/Pause 3.18
455 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
456 TestNetworkPlugins/group/auto/KubeletFlags 0.32
457 TestNetworkPlugins/group/auto/NetCatPod 8.2
458 TestNetworkPlugins/group/flannel/Start 52.72
459 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.07
460 TestNetworkPlugins/group/auto/DNS 0.14
461 TestNetworkPlugins/group/auto/Localhost 0.12
462 TestNetworkPlugins/group/auto/HairPin 0.12
463 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
464 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
465 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.9
466 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
467 TestNetworkPlugins/group/kindnet/NetCatPod 9.23
468 TestNetworkPlugins/group/enable-default-cni/Start 36.77
469 TestNetworkPlugins/group/kindnet/DNS 0.15
470 TestNetworkPlugins/group/kindnet/Localhost 0.14
471 TestNetworkPlugins/group/kindnet/HairPin 0.12
472 TestNetworkPlugins/group/bridge/Start 71.22
473 TestNetworkPlugins/group/custom-flannel/Start 53.63
474 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
475 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.19
476 TestNetworkPlugins/group/flannel/ControllerPod 6.01
477 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
478 TestNetworkPlugins/group/flannel/NetCatPod 9.28
479 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
480 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
481 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
482 TestNetworkPlugins/group/flannel/DNS 0.15
483 TestNetworkPlugins/group/flannel/Localhost 0.14
484 TestNetworkPlugins/group/flannel/HairPin 0.13
485 TestNetworkPlugins/group/calico/Start 51.82
486 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
487 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
488 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
489 TestNetworkPlugins/group/bridge/NetCatPod 9.2
490 TestNetworkPlugins/group/custom-flannel/DNS 0.18
491 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
492 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
493 TestNetworkPlugins/group/bridge/DNS 0.14
494 TestNetworkPlugins/group/bridge/Localhost 0.13
495 TestNetworkPlugins/group/bridge/HairPin 0.12
496 TestNetworkPlugins/group/calico/ControllerPod 6.01
497 TestNetworkPlugins/group/calico/KubeletFlags 0.29
498 TestNetworkPlugins/group/calico/NetCatPod 8.17
499 TestNetworkPlugins/group/calico/DNS 0.13
500 TestNetworkPlugins/group/calico/Localhost 0.11
501 TestNetworkPlugins/group/calico/HairPin 0.11
TestDownloadOnly/v1.28.0/json-events (12.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-653806 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-653806 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.195312781s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (12.20s)
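For reference, the command exercised here writes one JSON object per line to stdout (the --alsologtostderr output goes to stderr). A rough Go sketch of consuming that stream, assuming only line-delimited JSON; the exact event schema is not shown in this log, so each line is decoded into a generic map:

// json_events.go - consume the line-delimited JSON emitted by
// "minikube start -o=json --download-only ..." as run by the test above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-o=json", "--download-only", "-p", "download-only-653806",
		"--force", "--alsologtostderr", "--kubernetes-version=v1.28.0",
		"--container-runtime=containerd", "--driver=docker")

	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var event map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &event); err != nil {
			continue // skip any non-JSON lines in this sketch
		}
		fmt.Println(event)
	}
	_ = cmd.Wait()
}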

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1213 13:04:51.286501  405531 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1213 13:04:51.286590  405531 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-653806
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-653806: exit status 85 (76.677225ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-653806 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-653806 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:04:39
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:04:39.147491  405544 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:04:39.147769  405544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:39.147780  405544 out.go:374] Setting ErrFile to fd 2...
	I1213 13:04:39.147784  405544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:39.147992  405544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	W1213 13:04:39.148118  405544 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22122-401936/.minikube/config/config.json: open /home/jenkins/minikube-integration/22122-401936/.minikube/config/config.json: no such file or directory
	I1213 13:04:39.148610  405544 out.go:368] Setting JSON to true
	I1213 13:04:39.149673  405544 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6422,"bootTime":1765624657,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:04:39.149731  405544 start.go:143] virtualization: kvm guest
	I1213 13:04:39.153165  405544 out.go:99] [download-only-653806] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1213 13:04:39.153300  405544 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball: no such file or directory
	I1213 13:04:39.153399  405544 notify.go:221] Checking for updates...
	I1213 13:04:39.155767  405544 out.go:171] MINIKUBE_LOCATION=22122
	I1213 13:04:39.157398  405544 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:04:39.158911  405544 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	I1213 13:04:39.160179  405544 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	I1213 13:04:39.161449  405544 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 13:04:39.163684  405544 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 13:04:39.163966  405544 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:04:39.188292  405544 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:04:39.188431  405544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:04:39.244146  405544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-13 13:04:39.233330641 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:04:39.244296  405544 docker.go:319] overlay module found
	I1213 13:04:39.246233  405544 out.go:99] Using the docker driver based on user configuration
	I1213 13:04:39.246264  405544 start.go:309] selected driver: docker
	I1213 13:04:39.246272  405544 start.go:927] validating driver "docker" against <nil>
	I1213 13:04:39.246382  405544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:04:39.304666  405544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-13 13:04:39.294403512 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:04:39.304841  405544 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 13:04:39.305339  405544 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1213 13:04:39.305513  405544 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 13:04:39.307486  405544 out.go:171] Using Docker driver with root privileges
	I1213 13:04:39.308795  405544 cni.go:84] Creating CNI manager for ""
	I1213 13:04:39.308880  405544 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 13:04:39.308892  405544 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 13:04:39.308971  405544 start.go:353] cluster config:
	{Name:download-only-653806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-653806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:04:39.310469  405544 out.go:99] Starting "download-only-653806" primary control-plane node in "download-only-653806" cluster
	I1213 13:04:39.310490  405544 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 13:04:39.311869  405544 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:04:39.311909  405544 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1213 13:04:39.312008  405544 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:04:39.329269  405544 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 13:04:39.329471  405544 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 13:04:39.329567  405544 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 13:04:39.649828  405544 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1213 13:04:39.649869  405544 cache.go:65] Caching tarball of preloaded images
	I1213 13:04:39.650066  405544 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1213 13:04:39.651895  405544 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1213 13:04:39.651918  405544 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1213 13:04:39.746363  405544 preload.go:295] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1213 13:04:39.746534  405544 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1213 13:04:49.089340  405544 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	
	
	* The control-plane node download-only-653806 host does not exist
	  To start a cluster, run: "minikube start -p download-only-653806"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
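The Last Start log above shows the preload flow: the md5 is fetched from the GCS API and appended to the download URL as ?checksum=md5:.... A small sketch, independent of minikube's own download code, that re-verifies the cached tarball against that checksum, using the path and value reported in the log:

// verify_preload.go - recompute the md5 of the downloaded preload tarball and
// compare it to the checksum the log obtained from the GCS API.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func md5sum(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	const tarball = "/home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4"
	const want = "2746dfda401436a5341e0500068bf339" // checksum value from the log above

	got, err := md5sum(tarball)
	if err != nil {
		fmt.Println("cannot hash tarball:", err)
		return
	}
	fmt.Println("checksum ok:", got == want)
}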

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-653806
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (9.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-665508 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-665508 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.719354175s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (9.72s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1213 13:05:01.465459  405531 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1213 13:05:01.465548  405531 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)
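The preload-exists subtest only needs the tarball logged above to be present on disk after the download-only start. A trivial Go sketch of that check, with the path copied verbatim from the log:

// preload_exists.go - check that the previously downloaded preload tarball is
// still present locally, as the preload-exists subtest expects.
package main

import (
	"fmt"
	"os"
)

func main() {
	const preload = "/home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4"

	if _, err := os.Stat(preload); err != nil {
		fmt.Println("local preload missing:", err)
		return
	}
	fmt.Println("Found local preload:", preload)
}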

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-665508
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-665508: exit status 85 (77.783784ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-653806 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-653806 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ delete  │ -p download-only-653806                                                                                                                                                               │ download-only-653806 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ start   │ -o=json --download-only -p download-only-665508 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-665508 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:04:51
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:04:51.800024  405929 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:04:51.800265  405929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:51.800273  405929 out.go:374] Setting ErrFile to fd 2...
	I1213 13:04:51.800278  405929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:51.800488  405929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:04:51.800949  405929 out.go:368] Setting JSON to true
	I1213 13:04:51.801860  405929 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6435,"bootTime":1765624657,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:04:51.801920  405929 start.go:143] virtualization: kvm guest
	I1213 13:04:51.803943  405929 out.go:99] [download-only-665508] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:04:51.804135  405929 notify.go:221] Checking for updates...
	I1213 13:04:51.805425  405929 out.go:171] MINIKUBE_LOCATION=22122
	I1213 13:04:51.806967  405929 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:04:51.808266  405929 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	I1213 13:04:51.809713  405929 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	I1213 13:04:51.810927  405929 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 13:04:51.813057  405929 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 13:04:51.813455  405929 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:04:51.840892  405929 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:04:51.841017  405929 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:04:51.895274  405929 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-13 13:04:51.885453383 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:04:51.895409  405929 docker.go:319] overlay module found
	I1213 13:04:51.897196  405929 out.go:99] Using the docker driver based on user configuration
	I1213 13:04:51.897237  405929 start.go:309] selected driver: docker
	I1213 13:04:51.897243  405929 start.go:927] validating driver "docker" against <nil>
	I1213 13:04:51.897395  405929 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:04:51.949334  405929 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-13 13:04:51.939945119 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:04:51.949490  405929 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 13:04:51.949970  405929 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1213 13:04:51.950114  405929 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 13:04:51.951747  405929 out.go:171] Using Docker driver with root privileges
	I1213 13:04:51.952777  405929 cni.go:84] Creating CNI manager for ""
	I1213 13:04:51.952844  405929 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 13:04:51.952855  405929 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 13:04:51.952907  405929 start.go:353] cluster config:
	{Name:download-only-665508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-665508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:04:51.954067  405929 out.go:99] Starting "download-only-665508" primary control-plane node in "download-only-665508" cluster
	I1213 13:04:51.954086  405929 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 13:04:51.955124  405929 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:04:51.955153  405929 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 13:04:51.955276  405929 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:04:51.972273  405929 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 13:04:51.972410  405929 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 13:04:51.972432  405929 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1213 13:04:51.972439  405929 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1213 13:04:51.972451  405929 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1213 13:04:52.307194  405929 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	I1213 13:04:52.307233  405929 cache.go:65] Caching tarball of preloaded images
	I1213 13:04:52.307514  405929 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 13:04:52.309430  405929 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1213 13:04:52.309462  405929 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1213 13:04:52.407812  405929 preload.go:295] Got checksum from GCS API "9dc714afc7e85c27d8bb9ef4a563e9e2"
	I1213 13:04:52.407885  405929 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:9dc714afc7e85c27d8bb9ef4a563e9e2 -> /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-665508 host does not exist
	  To start a cluster, run: "minikube start -p download-only-665508"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-665508
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (10.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-925441 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-925441 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.186393149s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (10.19s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1213 13:05:12.112794  405531 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1213 13:05:12.112838  405531 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-925441
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-925441: exit status 85 (278.807543ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                             ARGS                                                                                             │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-653806 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-653806 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ delete  │ -p download-only-653806                                                                                                                                                                      │ download-only-653806 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ start   │ -o=json --download-only -p download-only-665508 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-665508 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │ 13 Dec 25 13:05 UTC │
	│ delete  │ -p download-only-665508                                                                                                                                                                      │ download-only-665508 │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │ 13 Dec 25 13:05 UTC │
	│ start   │ -o=json --download-only -p download-only-925441 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-925441 │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:05:01
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:05:01.979715  406297 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:05:01.979978  406297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:05:01.979989  406297 out.go:374] Setting ErrFile to fd 2...
	I1213 13:05:01.979994  406297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:05:01.980181  406297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:05:01.980672  406297 out.go:368] Setting JSON to true
	I1213 13:05:01.981618  406297 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6445,"bootTime":1765624657,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:05:01.981678  406297 start.go:143] virtualization: kvm guest
	I1213 13:05:01.983503  406297 out.go:99] [download-only-925441] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:05:01.983700  406297 notify.go:221] Checking for updates...
	I1213 13:05:01.984957  406297 out.go:171] MINIKUBE_LOCATION=22122
	I1213 13:05:01.986415  406297 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:05:01.987862  406297 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	I1213 13:05:01.989282  406297 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	I1213 13:05:01.990561  406297 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 13:05:01.992872  406297 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 13:05:01.993145  406297 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:05:02.015781  406297 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:05:02.015919  406297 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:05:02.071251  406297 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-13 13:05:02.061704841 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:05:02.071385  406297 docker.go:319] overlay module found
	I1213 13:05:02.072913  406297 out.go:99] Using the docker driver based on user configuration
	I1213 13:05:02.072955  406297 start.go:309] selected driver: docker
	I1213 13:05:02.072962  406297 start.go:927] validating driver "docker" against <nil>
	I1213 13:05:02.073053  406297 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:05:02.128162  406297 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-13 13:05:02.118514325 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:05:02.128383  406297 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 13:05:02.128938  406297 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1213 13:05:02.129099  406297 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 13:05:02.130934  406297 out.go:171] Using Docker driver with root privileges
	I1213 13:05:02.132306  406297 cni.go:84] Creating CNI manager for ""
	I1213 13:05:02.132406  406297 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 13:05:02.132424  406297 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 13:05:02.132497  406297 start.go:353] cluster config:
	{Name:download-only-925441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-925441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:05:02.133935  406297 out.go:99] Starting "download-only-925441" primary control-plane node in "download-only-925441" cluster
	I1213 13:05:02.133951  406297 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 13:05:02.135402  406297 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:05:02.135440  406297 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 13:05:02.135531  406297 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:05:02.153029  406297 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 13:05:02.153174  406297 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 13:05:02.153197  406297 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1213 13:05:02.153205  406297 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1213 13:05:02.153212  406297 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1213 13:05:02.486568  406297 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4
	I1213 13:05:02.486626  406297 cache.go:65] Caching tarball of preloaded images
	I1213 13:05:02.486853  406297 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 13:05:02.488661  406297 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1213 13:05:02.488686  406297 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1213 13:05:02.583023  406297 preload.go:295] Got checksum from GCS API "467b4da05bb0ee7bf09bfad9829193ef"
	I1213 13:05:02.583071  406297 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:467b4da05bb0ee7bf09bfad9829193ef -> /home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4
	I1213 13:05:11.060498  406297 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 13:05:11.060921  406297 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/download-only-925441/config.json ...
	I1213 13:05:11.060962  406297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/download-only-925441/config.json: {Name:mk0d4865f004ff11706da7f628c01b7168d779ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:11.061156  406297 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 13:05:11.061409  406297 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22122-401936/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl
	
	
	* The control-plane node download-only-925441 host does not exist
	  To start a cluster, run: "minikube start -p download-only-925441"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.28s)
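Note: the download.go lines in the captured log above fetch the preload tarball with a ?checksum=md5:... query parameter, using the checksum returned by the GCS API. A minimal Go sketch of that kind of post-download md5 verification is shown below; it is not minikube's actual download code, and the path and checksum literal are copied from the log purely for illustration.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes the file at path and compares it to the expected md5 hex digest.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Path and checksum copied from the log above purely for illustration.
	err := verifyMD5(
		"/home/jenkins/minikube-integration/22122-401936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4",
		"467b4da05bb0ee7bf09bfad9829193ef",
	)
	fmt.Println(err)
}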

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-925441
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnlyKic (0.41s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-129344 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "download-docker-129344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-129344
--- PASS: TestDownloadOnlyKic (0.41s)

                                                
                                    
TestBinaryMirror (0.84s)

                                                
                                                
=== RUN   TestBinaryMirror
I1213 13:05:13.630412  405531 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-887901 --alsologtostderr --binary-mirror http://127.0.0.1:45211 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-887901" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-887901
--- PASS: TestBinaryMirror (0.84s)

                                                
                                    
TestOffline (58.97s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-031782 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-031782 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (56.259817202s)
helpers_test.go:176: Cleaning up "offline-containerd-031782" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-031782
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-031782: (2.708572794s)
--- PASS: TestOffline (58.97s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-824997
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-824997: exit status 85 (70.730591ms)

                                                
                                                
-- stdout --
	* Profile "addons-824997" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-824997"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-824997
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-824997: exit status 85 (70.592768ms)

                                                
                                                
-- stdout --
	* Profile "addons-824997" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-824997"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (122.77s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-824997 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-824997 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m2.774143767s)
--- PASS: TestAddons/Setup (122.77s)

                                                
                                    
TestAddons/serial/Volcano (40.98s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:870: volcano-scheduler stabilized in 16.869912ms
addons_test.go:878: volcano-admission stabilized in 16.911625ms
addons_test.go:886: volcano-controller stabilized in 16.940949ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-nxpwg" [2c159a52-8863-412e-96e9-695de7722952] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003702546s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-m6xlr" [40c3fbba-548a-4d56-a841-b7e65e815c4b] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004264101s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-26lns" [44ecee92-fbd4-4685-8ad1-5b80a0a36d5d] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00425096s
addons_test.go:905: (dbg) Run:  kubectl --context addons-824997 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-824997 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-824997 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [367e759e-83f2-42a1-9035-1cb07b690f88] Pending
helpers_test.go:353: "test-job-nginx-0" [367e759e-83f2-42a1-9035-1cb07b690f88] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [367e759e-83f2-42a1-9035-1cb07b690f88] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.00362856s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-824997 addons disable volcano --alsologtostderr -v=1: (11.658410168s)
--- PASS: TestAddons/serial/Volcano (40.98s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-824997 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-824997 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.46s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-824997 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-824997 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [13bf900c-a6e0-4525-ab12-0eec78133355] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [13bf900c-a6e0-4525-ab12-0eec78133355] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004135007s
addons_test.go:696: (dbg) Run:  kubectl --context addons-824997 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-824997 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-824997 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.46s)

                                                
                                    
TestAddons/parallel/Registry (14.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.444727ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-hh7xr" [6e260024-f4a8-4789-a4ce-0e6144434b7f] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003377376s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-99sw8" [ae5fdbb1-065d-40ba-9f98-fb248ffde339] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003723779s
addons_test.go:394: (dbg) Run:  kubectl --context addons-824997 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-824997 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-824997 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.178992447s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 ip
2025/12/13 13:08:31 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.98s)
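Note: the registry check above amounts to a plain HTTP probe of the in-cluster registry service ("wget --spider -S http://registry.kube-system.svc.cluster.local") plus a debug GET against the node IP on port 5000. A minimal Go sketch of the same reachability probe follows; the 192.168.49.2:5000 address is copied from the log purely for illustration, and this is not the test's actual helper code.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// registryReachable issues a GET and treats any non-error, non-4xx/5xx response as reachable.
func registryReachable(url string) error {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %s", resp.Status)
	}
	return nil
}

func main() {
	fmt.Println(registryReachable("http://192.168.49.2:5000"))
}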

                                                
                                    
TestAddons/parallel/RegistryCreds (0.66s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.031149ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-824997
addons_test.go:334: (dbg) Run:  kubectl --context addons-824997 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.66s)

                                                
                                    
TestAddons/parallel/Ingress (21.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-824997 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-824997 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-824997 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [4e4639c7-239a-4123-bbb0-89f66eac9682] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [4e4639c7-239a-4123-bbb0-89f66eac9682] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.002771957s
I1213 13:08:46.501532  405531 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-824997 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-824997 addons disable ingress-dns --alsologtostderr -v=1: (1.335771235s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-824997 addons disable ingress --alsologtostderr -v=1: (7.732624522s)
--- PASS: TestAddons/parallel/Ingress (21.27s)
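Note: the ingress check above curls 127.0.0.1 on the node while forcing the Host header to nginx.example.com, so the nginx ingress controller routes the request by virtual host. A minimal Go sketch of the same Host-override request is shown below; the address and host name are copied from the log purely for illustration.

package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		fmt.Println(err)
		return
	}
	// The TCP connection still goes to 127.0.0.1; setting req.Host is the Go
	// equivalent of curl's -H 'Host: ...', so the ingress controller selects
	// the nginx.example.com virtual host.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}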

                                                
                                    
TestAddons/parallel/InspektorGadget (10.78s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-pk68l" [c5672b1b-56d4-42d8-b155-e452ff616998] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003589208s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-824997 addons disable inspektor-gadget --alsologtostderr -v=1: (5.77328119s)
--- PASS: TestAddons/parallel/InspektorGadget (10.78s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.65s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.079988ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-7q9sx" [ae8558d5-7777-4f0b-93db-322b4e89148f] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003247174s
addons_test.go:465: (dbg) Run:  kubectl --context addons-824997 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.65s)

                                                
                                    
TestAddons/parallel/CSI (31.79s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1213 13:08:32.324384  405531 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 13:08:32.327653  405531 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 13:08:32.327684  405531 kapi.go:107] duration metric: took 3.318157ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.332535ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-824997 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-824997 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-824997 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-824997 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-824997 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-824997 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-824997 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-824997 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-824997 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [3f980f43-235e-41e1-8110-a54d4815a256] Pending
helpers_test.go:353: "task-pv-pod" [3f980f43-235e-41e1-8110-a54d4815a256] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [3f980f43-235e-41e1-8110-a54d4815a256] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00381555s
addons_test.go:574: (dbg) Run:  kubectl --context addons-824997 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-824997 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-824997 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-824997 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-824997 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-824997 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-824997 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-824997 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-824997 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [1ada3299-3c72-44cd-9414-401869d6d98b] Pending
helpers_test.go:353: "task-pv-pod-restore" [1ada3299-3c72-44cd-9414-401869d6d98b] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004196194s
addons_test.go:616: (dbg) Run:  kubectl --context addons-824997 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-824997 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-824997 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-824997 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.532551951s)
--- PASS: TestAddons/parallel/CSI (31.79s)

                                                
                                    
TestAddons/parallel/Headlamp (18.49s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-824997 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-q5db7" [d641b92e-6e91-4d8f-b590-8f3dddfcaeb3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-q5db7" [d641b92e-6e91-4d8f-b590-8f3dddfcaeb3] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003909269s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-824997 addons disable headlamp --alsologtostderr -v=1: (5.683558507s)
--- PASS: TestAddons/parallel/Headlamp (18.49s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.52s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-5dx7s" [24503219-78bb-4103-a4d3-b4a6eaf7c0b9] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003439766s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-vq87l" [e2a48ba5-a70e-40a2-add1-029a7bf0ef4c] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003956541s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                    
TestAddons/parallel/Yakd (10.66s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-4shkf" [a946b60c-cdff-4df5-aac9-17051509aa0b] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003512097s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-824997 addons disable yakd --alsologtostderr -v=1: (5.654628781s)
--- PASS: TestAddons/parallel/Yakd (10.66s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.5s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-dmbzs" [279d1498-a14f-4451-817b-f77e32c0940f] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003660567s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-824997 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.50s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.64s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-824997
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-824997: (12.349284387s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-824997
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-824997
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-824997
--- PASS: TestAddons/StoppedEnableDisable (12.64s)

                                                
                                    
TestCertOptions (29.03s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-652721 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-652721 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (25.828273046s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-652721 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-652721 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-652721 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-652721" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-652721
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-652721: (2.474628406s)
--- PASS: TestCertOptions (29.03s)

TestCertExpiration (214.39s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-913044 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-913044 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (26.919497753s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-913044 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-913044 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (4.972882341s)
helpers_test.go:176: Cleaning up "cert-expiration-913044" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-913044
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-913044: (2.493279529s)
--- PASS: TestCertExpiration (214.39s)

TestForceSystemdFlag (23.09s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-224314 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-224314 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (20.585511725s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-224314 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-224314" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-224314
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-224314: (2.16188672s)
--- PASS: TestForceSystemdFlag (23.09s)

TestForceSystemdEnv (38.83s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-062543 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-062543 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.732107819s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-062543 ssh "cat /etc/containerd/config.toml"
E1213 13:49:42.880528  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:176: Cleaning up "force-systemd-env-062543" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-062543
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-062543: (4.779590955s)
--- PASS: TestForceSystemdEnv (38.83s)

TestErrorSpam/setup (19.17s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-286744 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-286744 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-286744 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-286744 --driver=docker  --container-runtime=containerd: (19.17340142s)
--- PASS: TestErrorSpam/setup (19.17s)

TestErrorSpam/start (0.68s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-286744 --log_dir /tmp/nospam-286744 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-286744 --log_dir /tmp/nospam-286744 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-286744 --log_dir /tmp/nospam-286744 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

TestErrorSpam/status (0.96s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-286744 --log_dir /tmp/nospam-286744 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-286744 --log_dir /tmp/nospam-286744 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-286744 --log_dir /tmp/nospam-286744 status
--- PASS: TestErrorSpam/status (0.96s)

TestErrorSpam/pause (1.46s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-286744 --log_dir /tmp/nospam-286744 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-286744 --log_dir /tmp/nospam-286744 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-286744 --log_dir /tmp/nospam-286744 pause
--- PASS: TestErrorSpam/pause (1.46s)

TestErrorSpam/unpause (1.53s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-286744 --log_dir /tmp/nospam-286744 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-286744 --log_dir /tmp/nospam-286744 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-286744 --log_dir /tmp/nospam-286744 unpause
--- PASS: TestErrorSpam/unpause (1.53s)

TestErrorSpam/stop (1.5s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-286744 --log_dir /tmp/nospam-286744 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-286744 --log_dir /tmp/nospam-286744 stop: (1.288719439s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-286744 --log_dir /tmp/nospam-286744 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-286744 --log_dir /tmp/nospam-286744 stop
--- PASS: TestErrorSpam/stop (1.50s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22122-401936/.minikube/files/etc/test/nested/copy/405531/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.31s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-217219 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-217219 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (38.312770429s)
--- PASS: TestFunctional/serial/StartWithProxy (38.31s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.8s)
=== RUN   TestFunctional/serial/SoftStart
I1213 13:15:27.509883  405531 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-217219 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-217219 --alsologtostderr -v=8: (5.7975443s)
functional_test.go:678: soft start took 5.798390819s for "functional-217219" cluster.
I1213 13:15:33.307955  405531 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (5.80s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-217219 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.89s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-217219 cache add registry.k8s.io/pause:3.3: (1.082513629s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.89s)

TestFunctional/serial/CacheCmd/cache/add_local (1.91s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-217219 /tmp/TestFunctionalserialCacheCmdcacheadd_local4129241083/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 cache add minikube-local-cache-test:functional-217219
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-217219 cache add minikube-local-cache-test:functional-217219: (1.560635784s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 cache delete minikube-local-cache-test:functional-217219
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-217219
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.91s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217219 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (286.486671ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 kubectl -- --context functional-217219 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-217219 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (37.02s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-217219 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-217219 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.014570486s)
functional_test.go:776: restart took 37.014732123s for "functional-217219" cluster.
I1213 13:16:17.549620  405531 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (37.02s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-217219 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.22s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-217219 logs: (1.218583611s)
--- PASS: TestFunctional/serial/LogsCmd (1.22s)

TestFunctional/serial/LogsFileCmd (1.23s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 logs --file /tmp/TestFunctionalserialLogsFileCmd4139739538/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-217219 logs --file /tmp/TestFunctionalserialLogsFileCmd4139739538/001/logs.txt: (1.233640047s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.23s)

TestFunctional/serial/InvalidService (4.54s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-217219 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-217219
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-217219: exit status 115 (351.068833ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32544 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-217219 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-217219 delete -f testdata/invalidsvc.yaml: (1.022043579s)
--- PASS: TestFunctional/serial/InvalidService (4.54s)

TestFunctional/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217219 config get cpus: exit status 14 (85.514689ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217219 config get cpus: exit status 14 (77.884668ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

TestFunctional/parallel/DryRun (0.39s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-217219 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-217219 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (168.432265ms)

-- stdout --
	* [functional-217219] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1213 13:17:04.842914  457427 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:17:04.843021  457427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:17:04.843030  457427 out.go:374] Setting ErrFile to fd 2...
	I1213 13:17:04.843034  457427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:17:04.843239  457427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:17:04.843681  457427 out.go:368] Setting JSON to false
	I1213 13:17:04.844691  457427 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7168,"bootTime":1765624657,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:17:04.844751  457427 start.go:143] virtualization: kvm guest
	I1213 13:17:04.846955  457427 out.go:179] * [functional-217219] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:17:04.848332  457427 notify.go:221] Checking for updates...
	I1213 13:17:04.848372  457427 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:17:04.849604  457427 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:17:04.850930  457427 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	I1213 13:17:04.852094  457427 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	I1213 13:17:04.853265  457427 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:17:04.854358  457427 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:17:04.855869  457427 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:17:04.856645  457427 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:17:04.881180  457427 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:17:04.881288  457427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:17:04.937645  457427 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-13 13:17:04.926295472 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:17:04.937760  457427 docker.go:319] overlay module found
	I1213 13:17:04.939478  457427 out.go:179] * Using the docker driver based on existing profile
	I1213 13:17:04.940842  457427 start.go:309] selected driver: docker
	I1213 13:17:04.940857  457427 start.go:927] validating driver "docker" against &{Name:functional-217219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-217219 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:17:04.940947  457427 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:17:04.942820  457427 out.go:203] 
	W1213 13:17:04.943960  457427 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 13:17:04.945056  457427 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-217219 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.39s)

TestFunctional/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-217219 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-217219 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (163.864713ms)

-- stdout --
	* [functional-217219] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1213 13:16:52.728849  453730 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:16:52.728952  453730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:16:52.728964  453730 out.go:374] Setting ErrFile to fd 2...
	I1213 13:16:52.728970  453730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:16:52.729263  453730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:16:52.729691  453730 out.go:368] Setting JSON to false
	I1213 13:16:52.730786  453730 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7156,"bootTime":1765624657,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:16:52.730849  453730 start.go:143] virtualization: kvm guest
	I1213 13:16:52.732892  453730 out.go:179] * [functional-217219] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1213 13:16:52.734240  453730 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:16:52.734252  453730 notify.go:221] Checking for updates...
	I1213 13:16:52.736464  453730 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:16:52.738008  453730 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	I1213 13:16:52.739225  453730 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	I1213 13:16:52.740275  453730 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:16:52.741494  453730 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:16:52.743063  453730 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:16:52.743591  453730 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:16:52.766434  453730 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:16:52.766544  453730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:16:52.821081  453730 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-13 13:16:52.811230818 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:16:52.821210  453730 docker.go:319] overlay module found
	I1213 13:16:52.822966  453730 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 13:16:52.824165  453730 start.go:309] selected driver: docker
	I1213 13:16:52.824184  453730 start.go:927] validating driver "docker" against &{Name:functional-217219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-217219 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:16:52.824343  453730 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:16:52.826089  453730 out.go:203] 
	W1213 13:16:52.827242  453730 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 13:16:52.828505  453730 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1.02s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)

TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (30.36s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [06a070ca-d0c6-4877-b7be-38b40019056b] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.002913737s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-217219 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-217219 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-217219 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-217219 apply -f testdata/storage-provisioner/pod.yaml
I1213 13:16:30.988545  405531 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [472c65e0-bece-4c6e-893c-88c5ac4c9dcc] Pending
helpers_test.go:353: "sp-pod" [472c65e0-bece-4c6e-893c-88c5ac4c9dcc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [472c65e0-bece-4c6e-893c-88c5ac4c9dcc] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.003711602s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-217219 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-217219 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-217219 apply -f testdata/storage-provisioner/pod.yaml
I1213 13:16:47.800930  405531 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [7b795015-67c2-478d-9955-9144a43d1cf2] Pending
helpers_test.go:353: "sp-pod" [7b795015-67c2-478d-9955-9144a43d1cf2] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.072364844s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-217219 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.36s)

TestFunctional/parallel/SSHCmd (0.54s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

TestFunctional/parallel/CpCmd (1.76s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh -n functional-217219 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 cp functional-217219:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3946226220/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh -n functional-217219 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh -n functional-217219 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.76s)

TestFunctional/parallel/MySQL (23.58s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-217219 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-shvdj" [57d68ac2-1a27-4c6d-8832-be16dfc85bd8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-shvdj" [57d68ac2-1a27-4c6d-8832-be16dfc85bd8] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.004155454s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-217219 exec mysql-6bcdcbc558-shvdj -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-217219 exec mysql-6bcdcbc558-shvdj -- mysql -ppassword -e "show databases;": exit status 1 (165.036113ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1213 13:16:46.732021  405531 retry.go:31] will retry after 558.343814ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-217219 exec mysql-6bcdcbc558-shvdj -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-217219 exec mysql-6bcdcbc558-shvdj -- mysql -ppassword -e "show databases;": exit status 1 (152.820065ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 13:16:47.444166  405531 retry.go:31] will retry after 1.754403004s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-217219 exec mysql-6bcdcbc558-shvdj -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-217219 exec mysql-6bcdcbc558-shvdj -- mysql -ppassword -e "show databases;": exit status 1 (106.489306ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 13:16:49.306214  405531 retry.go:31] will retry after 2.557874332s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-217219 exec mysql-6bcdcbc558-shvdj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.58s)

                                                
                                    
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/405531/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "sudo cat /etc/test/nested/copy/405531/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

                                                
                                    
TestFunctional/parallel/CertSync (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/405531.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "sudo cat /etc/ssl/certs/405531.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/405531.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "sudo cat /usr/share/ca-certificates/405531.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4055312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "sudo cat /etc/ssl/certs/4055312.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4055312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "sudo cat /usr/share/ca-certificates/4055312.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.68s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-217219 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217219 ssh "sudo systemctl is-active docker": exit status 1 (293.593046ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217219 ssh "sudo systemctl is-active crio": exit status 1 (297.499105ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

                                                
                                    
TestFunctional/parallel/License (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.40s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-217219 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-217219
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-217219
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-217219 image ls --format short --alsologtostderr:
I1213 13:17:12.990670  458588 out.go:360] Setting OutFile to fd 1 ...
I1213 13:17:12.990789  458588 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:17:12.990801  458588 out.go:374] Setting ErrFile to fd 2...
I1213 13:17:12.990813  458588 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:17:12.991030  458588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
I1213 13:17:12.991611  458588 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 13:17:12.991701  458588 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 13:17:12.992140  458588 cli_runner.go:164] Run: docker container inspect functional-217219 --format={{.State.Status}}
I1213 13:17:13.010585  458588 ssh_runner.go:195] Run: systemctl --version
I1213 13:17:13.010635  458588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-217219
I1213 13:17:13.028530  458588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/functional-217219/id_rsa Username:docker}
I1213 13:17:13.126453  458588 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-217219 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/library/minikube-local-cache-test │ functional-217219  │ sha256:ac4d0f │ 992B   │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ localhost/my-image                          │ functional-217219  │ sha256:e82e02 │ 775kB  │
│ public.ecr.aws/docker/library/mysql         │ 8.4                │ sha256:20d0be │ 233MB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:a5f569 │ 27.1MB │
│ docker.io/kicbase/echo-server               │ functional-217219  │ sha256:9056ab │ 2.37MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:a3e246 │ 22.9MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:8aa150 │ 26MB   │
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:88320b │ 17.4MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ public.ecr.aws/nginx/nginx                  │ alpine             │ sha256:a236f8 │ 23MB   │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:01e8ba │ 22.8MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-217219 image ls --format table --alsologtostderr:
I1213 13:17:17.232024  459107 out.go:360] Setting OutFile to fd 1 ...
I1213 13:17:17.232144  459107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:17:17.232153  459107 out.go:374] Setting ErrFile to fd 2...
I1213 13:17:17.232157  459107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:17:17.232369  459107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
I1213 13:17:17.232944  459107 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 13:17:17.233029  459107 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 13:17:17.233525  459107 cli_runner.go:164] Run: docker container inspect functional-217219 --format={{.State.Status}}
I1213 13:17:17.252411  459107 ssh_runner.go:195] Run: systemctl --version
I1213 13:17:17.252478  459107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-217219
I1213 13:17:17.270472  459107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/functional-217219/id_rsa Username:docker}
I1213 13:17:17.365189  459107 ssh_runner.go:195] Run: sudo crictl images --output json
E1213 13:17:17.852755  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:17:17.859146  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:17:17.870514  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:17:17.891915  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:17:17.933348  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:17:18.014828  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:17:18.176401  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:17:18.498708  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:17:19.140625  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:17:20.422165  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:17:22.983693  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:17:28.105251  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:17:38.346780  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:17:58.829004  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:18:39.790717  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:01.713000  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-217219 image ls --format json --alsologtostderr:
[{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-217219"],"size":"2372971"},{"id":"sha256:ac4d0f6dfad128f18b05ae908ff73d727cacd9fa107c9a3831ff97e56079ad76","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-217219"],"size":"992"},{"id":"sha256:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"233030909"},{"id":"sha256:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"re
poTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"22818657"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikub
e/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22995319"},{"id":"sha256:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c466046447
2045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"25963482"},{"id":"sha256:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"17382272"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:e82e027f8ad77ba6e9cb10ca996ba29fa87f47d4c68183fa750e32bee2cb9194","repoDigests":[],"repoTags":["localhost/my-image:functional-217219"],"size":"774888"},{"id":"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"22871747"},{"id":"sha256:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests
":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"27060130"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-217219 image ls --format json --alsologtostderr:
I1213 13:17:17.008858  459051 out.go:360] Setting OutFile to fd 1 ...
I1213 13:17:17.008977  459051 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:17:17.008986  459051 out.go:374] Setting ErrFile to fd 2...
I1213 13:17:17.008991  459051 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:17:17.009227  459051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
I1213 13:17:17.009870  459051 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 13:17:17.009978  459051 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 13:17:17.010982  459051 cli_runner.go:164] Run: docker container inspect functional-217219 --format={{.State.Status}}
I1213 13:17:17.029764  459051 ssh_runner.go:195] Run: systemctl --version
I1213 13:17:17.029817  459051 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-217219
I1213 13:17:17.046891  459051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/functional-217219/id_rsa Username:docker}
I1213 13:17:17.142080  459051 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-217219 image ls --format yaml --alsologtostderr:
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "22818657"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-217219
size: "2372971"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:ac4d0f6dfad128f18b05ae908ff73d727cacd9fa107c9a3831ff97e56079ad76
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-217219
size: "992"
- id: sha256:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "27060130"
- id: sha256:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "25963482"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:e82e027f8ad77ba6e9cb10ca996ba29fa87f47d4c68183fa750e32bee2cb9194
repoDigests: []
repoTags:
- localhost/my-image:functional-217219
size: "774888"
- id: sha256:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22995319"
- id: sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "22871747"
- id: sha256:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "17382272"
- id: sha256:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "233030909"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-217219 image ls --format yaml --alsologtostderr:
I1213 13:17:16.785067  458996 out.go:360] Setting OutFile to fd 1 ...
I1213 13:17:16.785309  458996 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:17:16.785328  458996 out.go:374] Setting ErrFile to fd 2...
I1213 13:17:16.785333  458996 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:17:16.785547  458996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
I1213 13:17:16.786059  458996 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 13:17:16.786139  458996 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 13:17:16.786565  458996 cli_runner.go:164] Run: docker container inspect functional-217219 --format={{.State.Status}}
I1213 13:17:16.805074  458996 ssh_runner.go:195] Run: systemctl --version
I1213 13:17:16.805122  458996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-217219
I1213 13:17:16.822592  458996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/functional-217219/id_rsa Username:docker}
I1213 13:17:16.917006  458996 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217219 ssh pgrep buildkitd: exit status 1 (275.695146ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 image build -t localhost/my-image:functional-217219 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-217219 image build -t localhost/my-image:functional-217219 testdata/build --alsologtostderr: (3.069086338s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-217219 image build -t localhost/my-image:functional-217219 testdata/build --alsologtostderr:
I1213 13:17:13.493626  458748 out.go:360] Setting OutFile to fd 1 ...
I1213 13:17:13.493750  458748 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:17:13.493762  458748 out.go:374] Setting ErrFile to fd 2...
I1213 13:17:13.493768  458748 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:17:13.493997  458748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
I1213 13:17:13.494540  458748 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 13:17:13.495159  458748 config.go:182] Loaded profile config "functional-217219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 13:17:13.495614  458748 cli_runner.go:164] Run: docker container inspect functional-217219 --format={{.State.Status}}
I1213 13:17:13.514901  458748 ssh_runner.go:195] Run: systemctl --version
I1213 13:17:13.514966  458748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-217219
I1213 13:17:13.532217  458748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/functional-217219/id_rsa Username:docker}
I1213 13:17:13.627995  458748 build_images.go:162] Building image from path: /tmp/build.151836334.tar
I1213 13:17:13.628065  458748 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 13:17:13.636585  458748 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.151836334.tar
I1213 13:17:13.640534  458748 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.151836334.tar: stat -c "%s %y" /var/lib/minikube/build/build.151836334.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.151836334.tar': No such file or directory
I1213 13:17:13.640564  458748 ssh_runner.go:362] scp /tmp/build.151836334.tar --> /var/lib/minikube/build/build.151836334.tar (3072 bytes)
I1213 13:17:13.659186  458748 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.151836334
I1213 13:17:13.667194  458748 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.151836334 -xf /var/lib/minikube/build/build.151836334.tar
I1213 13:17:13.675264  458748 containerd.go:394] Building image: /var/lib/minikube/build/build.151836334
I1213 13:17:13.675407  458748 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.151836334 --local dockerfile=/var/lib/minikube/build/build.151836334 --output type=image,name=localhost/my-image:functional-217219
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.8s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.8s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.8s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.1s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:3476e5b2b4cd07042b4f71919159a110d18d8d0a2b1657ef6341a033163871de done
#8 exporting config sha256:e82e027f8ad77ba6e9cb10ca996ba29fa87f47d4c68183fa750e32bee2cb9194
#8 exporting config sha256:e82e027f8ad77ba6e9cb10ca996ba29fa87f47d4c68183fa750e32bee2cb9194 done
#8 naming to localhost/my-image:functional-217219 done
#8 DONE 0.1s
I1213 13:17:16.478552  458748 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.151836334 --local dockerfile=/var/lib/minikube/build/build.151836334 --output type=image,name=localhost/my-image:functional-217219: (2.803091884s)
I1213 13:17:16.478664  458748 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.151836334
I1213 13:17:16.488597  458748 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.151836334.tar
I1213 13:17:16.496463  458748 build_images.go:218] Built localhost/my-image:functional-217219 from /tmp/build.151836334.tar
I1213 13:17:16.496492  458748 build_images.go:134] succeeded building to: functional-217219
I1213 13:17:16.496497  458748 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.729011061s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-217219
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 image load --daemon kicbase/echo-server:functional-217219 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 image load --daemon kicbase/echo-server:functional-217219 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-217219
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 image load --daemon kicbase/echo-server:functional-217219 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-217219 image load --daemon kicbase/echo-server:functional-217219 --alsologtostderr: (1.043072468s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 image save kicbase/echo-server:functional-217219 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 image rm kicbase/echo-server:functional-217219 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-217219
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 image save --daemon kicbase/echo-server:functional-217219 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-217219
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "335.435224ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "62.989057ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "333.396983ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "61.348389ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-217219 /tmp/TestFunctionalparallelMountCmdany-port411870809/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765631814046455428" to /tmp/TestFunctionalparallelMountCmdany-port411870809/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765631814046455428" to /tmp/TestFunctionalparallelMountCmdany-port411870809/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765631814046455428" to /tmp/TestFunctionalparallelMountCmdany-port411870809/001/test-1765631814046455428
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217219 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (280.743078ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 13:16:54.327542  405531 retry.go:31] will retry after 705.743625ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 13:16 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 13:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 13:16 test-1765631814046455428
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh cat /mount-9p/test-1765631814046455428
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-217219 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [c537f9a5-5278-41ee-99f8-bec0cc62d77b] Pending
helpers_test.go:353: "busybox-mount" [c537f9a5-5278-41ee-99f8-bec0cc62d77b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [c537f9a5-5278-41ee-99f8-bec0cc62d77b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [c537f9a5-5278-41ee-99f8-bec0cc62d77b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004044757s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-217219 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-217219 /tmp/TestFunctionalparallelMountCmdany-port411870809/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.99s)

TestFunctional/parallel/MountCmd/specific-port (1.86s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-217219 /tmp/TestFunctionalparallelMountCmdspecific-port1171935147/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217219 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (283.884725ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 13:17:01.320936  405531 retry.go:31] will retry after 555.9098ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-217219 /tmp/TestFunctionalparallelMountCmdspecific-port1171935147/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217219 ssh "sudo umount -f /mount-9p": exit status 1 (269.468017ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-217219 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-217219 /tmp/TestFunctionalparallelMountCmdspecific-port1171935147/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.86s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.89s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-217219 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821121883/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-217219 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821121883/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-217219 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821121883/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217219 ssh "findmnt -T" /mount1: exit status 1 (336.319699ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 13:17:03.235623  405531 retry.go:31] will retry after 658.236416ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-217219 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-217219 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821121883/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-217219 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821121883/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-217219 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821121883/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.89s)
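
VerifyCleanup relies on `minikube mount -p <profile> --kill=true` to tear down all three mount daemons at once. A small Go sketch of that cleanup check, assuming `minikube` is on PATH; the profile name and mount paths are the ones used above:

// Ask minikube to kill all mount processes for the profile, then confirm the
// guest no longer reports a 9p filesystem at each mount point.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-217219"
	if out, err := exec.Command("minikube", "mount", "-p", profile, "--kill=true").CombinedOutput(); err != nil {
		fmt.Printf("kill failed: %v\n%s", err, out)
		return
	}
	for _, mp := range []string{"/mount1", "/mount2", "/mount3"} {
		// grep exits non-zero when no 9p entry is found, so an error here
		// means the mount is gone, which is what cleanup should achieve.
		err := exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T "+mp+" | grep 9p").Run()
		fmt.Printf("%s still mounted: %v\n", mp, err == nil)
	}
}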

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-217219 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-217219 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-217219 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 457839: os: process already finished
helpers_test.go:520: unable to terminate pid 457650: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-217219 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-217219 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (6.2s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-217219 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [2d24341b-a63c-4617-a687-613e5de69f74] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [2d24341b-a63c-4617-a687-613e5de69f74] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 6.002770215s
I1213 13:17:11.777662  405531 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (6.20s)
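
The Setup step polls for a pod labeled run=nginx-svc to reach phase Running within the timeout. A minimal sketch of the same wait using kubectl and jsonpath, assuming the kubectl context from this run and kubectl on PATH:

// Wait for a pod matching run=nginx-svc to report phase Running, polling the
// API through kubectl the same way the harness does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-217219",
			"get", "pods", "-l", "run=nginx-svc",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("nginx-svc pod is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for run=nginx-svc")
}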

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-217219 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.248.124 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
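
AccessDirect reads the LoadBalancer ingress IP that `minikube tunnel` assigned to nginx-svc and issues a plain HTTP GET against it. A sketch of that round trip, assuming kubectl on PATH and the context used above:

// Resolve the LoadBalancer ingress IP of nginx-svc, then fetch it over HTTP
// to confirm the tunnel is routing traffic.
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-217219",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		fmt.Println("could not read ingress IP:", err)
		return
	}
	ip := strings.TrimSpace(string(out))
	resp, err := http.Get("http://" + ip)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("tunnel reachable, status:", resp.Status)
}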

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-217219 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/List (1.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-217219 service list: (1.706324392s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.71s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-217219 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-217219 service list -o json: (1.706444862s)
functional_test.go:1504: Took "1.706549309s" to run "out/minikube-linux-amd64 -p functional-217219 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)
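
JSONOutput exercises `minikube service list -o json`. The payload itself is not shown in the log, so the sketch below decodes it into a generic structure rather than assuming field names:

// Consume `minikube service list -o json` without committing to a schema.
// The generic decode is an assumption; inspect the real output before relying
// on specific keys.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-217219",
		"service", "list", "-o", "json").Output()
	if err != nil {
		fmt.Println("service list failed:", err)
		return
	}
	var services []map[string]any
	if err := json.Unmarshal(out, &services); err != nil {
		fmt.Println("unexpected JSON shape:", err)
		return
	}
	for _, svc := range services {
		fmt.Println(svc)
	}
}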

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-217219
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-217219
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-217219
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22122-401936/.minikube/files/etc/test/nested/copy/405531/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (34.62s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-017456 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-017456 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (34.619734059s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (34.62s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (5.77s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1213 13:27:14.838561  405531 config.go:182] Loaded profile config "functional-017456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-017456 --alsologtostderr -v=8
E1213 13:27:17.853517  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-017456 --alsologtostderr -v=8: (5.770937868s)
functional_test.go:678: soft start took 5.771324502s for "functional-017456" cluster.
I1213 13:27:20.609862  405531 config.go:182] Loaded profile config "functional-017456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (5.77s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-017456 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.72s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.85s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-017456 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1806172081/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 cache add minikube-local-cache-test:functional-017456
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-017456 cache add minikube-local-cache-test:functional-017456: (1.56473388s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 cache delete minikube-local-cache-test:functional-017456
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-017456
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.85s)
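
add_local builds a throwaway image, adds it to minikube's cache, then deletes both again. A sketch of the same sequence driven as plain commands; the docker build context (".") is a placeholder rather than the temp directory the harness generates:

// Mirror the add_local steps logged above: build, cache add, cache delete,
// image remove. The build context "." is a stand-in for a real directory.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s(err=%v)\n", name, args, out, err)
}

func main() {
	img := "minikube-local-cache-test:functional-017456"
	run("docker", "build", "-t", img, ".")
	run("minikube", "-p", "functional-017456", "cache", "add", img)
	run("minikube", "-p", "functional-017456", "cache", "delete", img)
	run("docker", "rmi", img)
}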

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-017456 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (286.674822ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.56s)
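
cache_reload removes the cached pause image inside the node with crictl, confirms it is gone (the expected non-zero exit above), runs `cache reload`, and confirms it is back. A sketch of that sequence, assuming `minikube` is on PATH:

// Remove the cached image in the node, verify it is missing, reload the
// cache, then verify it is present again, as the log above does.
package main

import (
	"fmt"
	"os/exec"
)

func mk(args ...string) error {
	full := append([]string{"-p", "functional-017456"}, args...)
	out, err := exec.Command("minikube", full...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", full, out)
	return err
}

func main() {
	mk("ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	if err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("image unexpectedly still present")
	}
	mk("cache", "reload")
	if err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}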

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 kubectl -- --context functional-017456 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-017456 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (44.98s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-017456 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-017456 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.976900971s)
functional_test.go:776: restart took 44.97705192s for "functional-017456" cluster.
I1213 13:28:12.616866  405531 config.go:182] Loaded profile config "functional-017456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (44.98s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-017456 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)
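
ComponentHealth lists the tier=control-plane pods as JSON and reports each component's phase and Ready condition, which is what the phase/status lines above summarize. A sketch of the same check with a minimal decode of the pod list:

// List control-plane pods and print each component's phase and Ready
// condition, reproducing the checks summarized in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-017456",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}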

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-017456 logs: (1.227160745s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2377933097/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-017456 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2377933097/001/logs.txt: (1.241381334s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-017456 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-017456
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-017456: exit status 115 (351.240879ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30726 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-017456 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-017456 config get cpus: exit status 14 (103.240806ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-017456 config get cpus: exit status 14 (91.818017ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.52s)
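
ConfigCmd expects exit code 14 whenever `config get cpus` is asked for an unset key, which is the "Non-zero exit" seen twice above. A sketch of reading that exit code from Go via exec.ExitError:

// Run `minikube config get cpus` and report its exit code when the key is
// unset, the condition the test checks for (14 in the run above).
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-017456",
		"config", "get", "cpus").CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("cpus is set: %s", out)
	case errors.As(err, &exitErr):
		fmt.Printf("config get exited with code %d: %s", exitErr.ExitCode(), out)
	default:
		fmt.Println("could not run minikube:", err)
	}
}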

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-017456 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-017456 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (201.011958ms)

-- stdout --
	* [functional-017456] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1213 13:28:38.968293  480415 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:28:38.968575  480415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:28:38.968586  480415 out.go:374] Setting ErrFile to fd 2...
	I1213 13:28:38.968590  480415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:28:38.968834  480415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:28:38.969360  480415 out.go:368] Setting JSON to false
	I1213 13:28:38.970444  480415 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7862,"bootTime":1765624657,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:28:38.970505  480415 start.go:143] virtualization: kvm guest
	I1213 13:28:38.973070  480415 out.go:179] * [functional-017456] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:28:38.974948  480415 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:28:38.974965  480415 notify.go:221] Checking for updates...
	I1213 13:28:38.977656  480415 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:28:38.979015  480415 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	I1213 13:28:38.982898  480415 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	I1213 13:28:38.984184  480415 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:28:38.985452  480415 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:28:38.987544  480415 config.go:182] Loaded profile config "functional-017456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 13:28:38.988272  480415 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:28:39.015614  480415 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:28:39.015718  480415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:28:39.082336  480415 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:59 SystemTime:2025-12-13 13:28:39.071119602 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:28:39.082482  480415 docker.go:319] overlay module found
	I1213 13:28:39.084518  480415 out.go:179] * Using the docker driver based on existing profile
	I1213 13:28:39.085836  480415 start.go:309] selected driver: docker
	I1213 13:28:39.085867  480415 start.go:927] validating driver "docker" against &{Name:functional-017456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-017456 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:28:39.085991  480415 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:28:39.087950  480415 out.go:203] 
	W1213 13:28:39.089310  480415 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 13:28:39.090739  480415 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-017456 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-017456 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-017456 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (210.788235ms)

-- stdout --
	* [functional-017456] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1213 13:28:39.017933  480444 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:28:39.018063  480444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:28:39.018073  480444 out.go:374] Setting ErrFile to fd 2...
	I1213 13:28:39.018080  480444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:28:39.018439  480444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:28:39.021753  480444 out.go:368] Setting JSON to false
	I1213 13:28:39.023039  480444 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7862,"bootTime":1765624657,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:28:39.023125  480444 start.go:143] virtualization: kvm guest
	I1213 13:28:39.025377  480444 out.go:179] * [functional-017456] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1213 13:28:39.026807  480444 notify.go:221] Checking for updates...
	I1213 13:28:39.026815  480444 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:28:39.032010  480444 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:28:39.034051  480444 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	I1213 13:28:39.036147  480444 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	I1213 13:28:39.041987  480444 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:28:39.045163  480444 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:28:39.047428  480444 config.go:182] Loaded profile config "functional-017456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 13:28:39.048340  480444 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:28:39.077285  480444 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:28:39.077441  480444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:28:39.142566  480444 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-13 13:28:39.131036892 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:28:39.142727  480444 docker.go:319] overlay module found
	I1213 13:28:39.144933  480444 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 13:28:39.146332  480444 start.go:309] selected driver: docker
	I1213 13:28:39.146350  480444 start.go:927] validating driver "docker" against &{Name:functional-017456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-017456 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:28:39.146465  480444 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:28:39.148970  480444 out.go:203] 
	W1213 13:28:39.150416  480444 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 13:28:39.151742  480444 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.21s)
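
InternationalLanguage verifies the French output shown above. A sketch of forcing the same localized run by setting the locale environment before invoking minikube; the assumption that translation selection follows LC_ALL is mine and not confirmed by this log:

// Invoke the same dry-run start with a French locale in the environment,
// expecting localized output (assumption: LC_ALL drives translation choice).
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-017456",
		"--dry-run", "--memory", "250MB", "--driver=docker",
		"--container-runtime=containerd", "--kubernetes-version=v1.35.0-beta.0")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\nexit: %v\n", out, err)
}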

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (10.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-017456 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-017456 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-4v4lk" [0d12b63c-7070-486a-a1f4-7383f8078677] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-4v4lk" [0d12b63c-7070-486a-a1f4-7383f8078677] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003329184s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32694
functional_test.go:1680: http://192.168.49.2:32694: success! body:
Request served by hello-node-connect-9f67c86d4-4v4lk

HTTP/1.1 GET /

Host: 192.168.49.2:32694
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (10.54s)
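
ServiceCmdConnect creates and exposes a deployment, resolves its NodePort URL through minikube, and checks the echoed request whose headers appear above. A sketch of that round trip (in practice you would also wait for the pod to be Running first, as the test does):

// Create and expose the echo-server deployment, resolve its URL via
// `minikube service --url`, and fetch it, echoing the request back.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	ctx := "functional-017456"
	exec.Command("kubectl", "--context", ctx, "create", "deployment",
		"hello-node-connect", "--image", "kicbase/echo-server").Run()
	exec.Command("kubectl", "--context", ctx, "expose", "deployment",
		"hello-node-connect", "--type=NodePort", "--port=8080").Run()

	// Note: the pod needs to be Running before the GET will succeed.
	out, err := exec.Command("minikube", "-p", ctx, "service",
		"hello-node-connect", "--url").Output()
	if err != nil {
		fmt.Println("could not resolve service URL:", err)
		return
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s\n%s", resp.Status, body)
}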

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (18.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [8f419dbd-08d0-4837-bda7-d3a10381124d] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003470239s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-017456 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-017456 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-017456 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-017456 apply -f testdata/storage-provisioner/pod.yaml
I1213 13:28:25.661397  405531 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [6142876a-2aec-4b91-8afd-7b2cc587252b] Pending
helpers_test.go:353: "sp-pod" [6142876a-2aec-4b91-8afd-7b2cc587252b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.0039092s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-017456 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-017456 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-017456 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [22be6582-22f1-4387-9c48-5c21f5449d4d] Pending
helpers_test.go:353: "sp-pod" [22be6582-22f1-4387-9c48-5c21f5449d4d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004524843s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-017456 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (18.65s)
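
In outline, the persistence check above is: create the PVC and a pod that mounts it, write a file, delete and re-create the pod, then confirm the file survived. A condensed sketch using the same manifests and context as the test (the waits the test performs between steps are omitted here):

  kubectl --context functional-017456 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-017456 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-017456 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-017456 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-017456 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-017456 exec sp-pod -- ls /tmp/mount    # expect "foo"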

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.67s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.67s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh -n functional-017456 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 cp functional-017456:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3097217647/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh -n functional-017456 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh -n functional-017456 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.84s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (26.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-017456 replace --force -f testdata/mysql.yaml
I1213 13:28:32.819257  405531 detect.go:223] nested VM detected
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-5ppq4" [9e437e62-6eb7-4bfc-9c15-50cd5c54ca27] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-5ppq4" [9e437e62-6eb7-4bfc-9c15-50cd5c54ca27] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 17.004315439s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-017456 exec mysql-7d7b65bc95-5ppq4 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-017456 exec mysql-7d7b65bc95-5ppq4 -- mysql -ppassword -e "show databases;": exit status 1 (130.159076ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 13:28:50.072236  405531 retry.go:31] will retry after 740.92049ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-017456 exec mysql-7d7b65bc95-5ppq4 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-017456 exec mysql-7d7b65bc95-5ppq4 -- mysql -ppassword -e "show databases;": exit status 1 (109.640115ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 13:28:50.923554  405531 retry.go:31] will retry after 947.127715ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-017456 exec mysql-7d7b65bc95-5ppq4 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-017456 exec mysql-7d7b65bc95-5ppq4 -- mysql -ppassword -e "show databases;": exit status 1 (146.389368ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 13:28:52.017422  405531 retry.go:31] will retry after 3.062909991s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-017456 exec mysql-7d7b65bc95-5ppq4 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-017456 exec mysql-7d7b65bc95-5ppq4 -- mysql -ppassword -e "show databases;": exit status 1 (114.089834ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 13:28:55.195650  405531 retry.go:31] will retry after 3.581869462s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-017456 exec mysql-7d7b65bc95-5ppq4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (26.13s)
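
The ERROR 2002 and ERROR 1045 messages above are expected while mysqld is still initialising inside the pod; the test simply retries with backoff until the query succeeds. A rough shell equivalent of that retry behaviour (the loop itself is an illustration, not the test's code; pod name taken from the log):

  # Retry "show databases" until the MySQL pod accepts the connection
  until kubectl --context functional-017456 exec mysql-7d7b65bc95-5ppq4 -- \
      mysql -ppassword -e "show databases;"; do
    sleep 2
  done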

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/405531/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "sudo cat /etc/test/nested/copy/405531/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/405531.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "sudo cat /etc/ssl/certs/405531.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/405531.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "sudo cat /usr/share/ca-certificates/405531.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4055312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "sudo cat /etc/ssl/certs/4055312.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4055312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "sudo cat /usr/share/ca-certificates/4055312.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.85s)
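
The .0 files checked above appear to be OpenSSL subject-hash names for the synced certificates; a quick cross-check, assuming a local copy of one of the certs (the path below is a placeholder):

  # The printed hash should match one of the /etc/ssl/certs/<hash>.0 names above
  openssl x509 -noout -hash -in /path/to/405531.pem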

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-017456 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-017456 ssh "sudo systemctl is-active docker": exit status 1 (283.425742ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-017456 ssh "sudo systemctl is-active crio": exit status 1 (278.013722ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.56s)
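
systemctl is-active exits with status 3 when a unit is inactive, so the non-zero exits above are the expected result for a containerd profile: docker and crio are disabled. The same check for the active runtime, as a small sketch:

  out/minikube-linux-amd64 -p functional-017456 ssh "sudo systemctl is-active containerd"   # expect "active", exit 0
  out/minikube-linux-amd64 -p functional-017456 ssh "sudo systemctl is-active docker"       # expect "inactive", exit 3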

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-017456 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-017456 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-7snlj" [64c9cf92-294c-45ae-9a2c-6b7b3a355359] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-7snlj" [64c9cf92-294c-45ae-9a2c-6b7b3a355359] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003953956s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-017456 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-017456 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-017456 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-017456 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 475026: os: process already finished
helpers_test.go:526: unable to kill pid 474715: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-017456 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (10.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-017456 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [62eff38e-6d56-4278-91ed-6d6a0f30b89b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [62eff38e-6d56-4278-91ed-6d6a0f30b89b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00341582s
I1213 13:28:30.893640  405531 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (10.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.50s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 service list -o json
functional_test.go:1504: Took "501.449606ms" to run "out/minikube-linux-amd64 -p functional-017456 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.50s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30543
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30543
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-017456 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.129.128 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)
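
While minikube tunnel is running, the LoadBalancer ingress IP reported above is reachable directly from the host; a sketch of the same check (the curl step is an addition for illustration):

  # Terminal 1: keep the tunnel open
  out/minikube-linux-amd64 -p functional-017456 tunnel --alsologtostderr
  # Terminal 2: read the ingress IP assigned to nginx-svc and hit it
  IP=$(kubectl --context functional-017456 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl -s "http://$IP" >/dev/null && echo "tunnel at http://$IP is working"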

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-017456 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.56s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-017456 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-017456
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-017456
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-017456 image ls --format short --alsologtostderr:
I1213 13:28:40.643066  481332 out.go:360] Setting OutFile to fd 1 ...
I1213 13:28:40.643383  481332 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:28:40.643398  481332 out.go:374] Setting ErrFile to fd 2...
I1213 13:28:40.643405  481332 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:28:40.643756  481332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
I1213 13:28:40.644589  481332 config.go:182] Loaded profile config "functional-017456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 13:28:40.644730  481332 config.go:182] Loaded profile config "functional-017456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 13:28:40.645479  481332 cli_runner.go:164] Run: docker container inspect functional-017456 --format={{.State.Status}}
I1213 13:28:40.670359  481332 ssh_runner.go:195] Run: systemctl --version
I1213 13:28:40.670420  481332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-017456
I1213 13:28:40.697008  481332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33172 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/functional-017456/id_rsa Username:docker}
I1213 13:28:40.799203  481332 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-017456 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:a3e246 │ 22.9MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0     │ sha256:8a4ded │ 25.8MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0     │ sha256:7bb621 │ 17.2MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:aa5e3e │ 23.6MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0     │ sha256:aa9d02 │ 27.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0     │ sha256:45f3cc │ 23.1MB │
│ docker.io/kicbase/echo-server               │ functional-017456  │ sha256:9056ab │ 2.37MB │
│ docker.io/library/minikube-local-cache-test │ functional-017456  │ sha256:ac4d0f │ 992B   │
│ public.ecr.aws/nginx/nginx                  │ alpine             │ sha256:a236f8 │ 23MB   │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-017456 image ls --format table --alsologtostderr:
I1213 13:28:43.368818  482463 out.go:360] Setting OutFile to fd 1 ...
I1213 13:28:43.368954  482463 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:28:43.368966  482463 out.go:374] Setting ErrFile to fd 2...
I1213 13:28:43.368973  482463 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:28:43.369301  482463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
I1213 13:28:43.370125  482463 config.go:182] Loaded profile config "functional-017456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 13:28:43.370261  482463 config.go:182] Loaded profile config "functional-017456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 13:28:43.370905  482463 cli_runner.go:164] Run: docker container inspect functional-017456 --format={{.State.Status}}
I1213 13:28:43.392471  482463 ssh_runner.go:195] Run: systemctl --version
I1213 13:28:43.392536  482463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-017456
I1213 13:28:43.414035  482463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33172 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/functional-017456/id_rsa Username:docker}
I1213 13:28:43.517410  482463 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-017456 image ls --format json --alsologtostderr:
[{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-017456"],"size":"2372971"},{"id":"sha256:ac4d0f6dfad128f18b05ae908ff73d727cacd9fa107c9a3831ff97e56079ad76","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-017456"],"size":"992"},{"id":"sha256:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"23121143"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:a236f84b9d5d27fe4bf2bab075
01cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22995319"},{"id":"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"22871747"},{"id":"sha256:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"27671920"},{"id":"sha256:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"s
ize":"17228488"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae1
4e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"23553139"},{"id":"sha256:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"25786942"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-017456 image ls --format json --alsologtostderr:
I1213 13:28:40.915120  481568 out.go:360] Setting OutFile to fd 1 ...
I1213 13:28:40.915438  481568 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:28:40.915452  481568 out.go:374] Setting ErrFile to fd 2...
I1213 13:28:40.915458  481568 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:28:40.915754  481568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
I1213 13:28:40.916503  481568 config.go:182] Loaded profile config "functional-017456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 13:28:40.916640  481568 config.go:182] Loaded profile config "functional-017456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 13:28:40.917226  481568 cli_runner.go:164] Run: docker container inspect functional-017456 --format={{.State.Status}}
I1213 13:28:40.940312  481568 ssh_runner.go:195] Run: systemctl --version
I1213 13:28:40.940392  481568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-017456
I1213 13:28:40.962078  481568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33172 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/functional-017456/id_rsa Username:docker}
I1213 13:28:41.065931  481568 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.27s)
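
Of the four list formats (short, table, json, yaml), the json form is the easiest to script against; a small sketch, assuming jq is available (not something the test uses):

  # Print just the repo:tag pairs known to the containerd runtime in this profile
  out/minikube-linux-amd64 -p functional-017456 image ls --format json | jq -r '.[].repoTags[]'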

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-017456 image ls --format yaml --alsologtostderr:
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-017456
size: "2372971"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "23553139"
- id: sha256:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "23121143"
- id: sha256:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "17228488"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "22871747"
- id: sha256:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "25786942"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:ac4d0f6dfad128f18b05ae908ff73d727cacd9fa107c9a3831ff97e56079ad76
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-017456
size: "992"
- id: sha256:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22995319"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "27671920"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-017456 image ls --format yaml --alsologtostderr:
I1213 13:28:43.629626  482513 out.go:360] Setting OutFile to fd 1 ...
I1213 13:28:43.629728  482513 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:28:43.629739  482513 out.go:374] Setting ErrFile to fd 2...
I1213 13:28:43.629746  482513 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:28:43.629949  482513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
I1213 13:28:43.630566  482513 config.go:182] Loaded profile config "functional-017456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 13:28:43.630655  482513 config.go:182] Loaded profile config "functional-017456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 13:28:43.631118  482513 cli_runner.go:164] Run: docker container inspect functional-017456 --format={{.State.Status}}
I1213 13:28:43.653524  482513 ssh_runner.go:195] Run: systemctl --version
I1213 13:28:43.653576  482513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-017456
I1213 13:28:43.677035  482513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33172 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/functional-017456/id_rsa Username:docker}
I1213 13:28:43.779491  482513 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (5.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-017456 ssh pgrep buildkitd: exit status 1 (309.464654ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 image build -t localhost/my-image:functional-017456 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-017456 image build -t localhost/my-image:functional-017456 testdata/build --alsologtostderr: (4.663011169s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-017456 image build -t localhost/my-image:functional-017456 testdata/build --alsologtostderr:
I1213 13:28:41.496254  481931 out.go:360] Setting OutFile to fd 1 ...
I1213 13:28:41.496532  481931 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:28:41.496544  481931 out.go:374] Setting ErrFile to fd 2...
I1213 13:28:41.496550  481931 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:28:41.496762  481931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
I1213 13:28:41.497406  481931 config.go:182] Loaded profile config "functional-017456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 13:28:41.498258  481931 config.go:182] Loaded profile config "functional-017456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 13:28:41.499030  481931 cli_runner.go:164] Run: docker container inspect functional-017456 --format={{.State.Status}}
I1213 13:28:41.521132  481931 ssh_runner.go:195] Run: systemctl --version
I1213 13:28:41.521181  481931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-017456
I1213 13:28:41.542978  481931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33172 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/functional-017456/id_rsa Username:docker}
I1213 13:28:41.645147  481931 build_images.go:162] Building image from path: /tmp/build.2026109689.tar
I1213 13:28:41.645215  481931 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 13:28:41.654999  481931 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2026109689.tar
I1213 13:28:41.658981  481931 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2026109689.tar: stat -c "%s %y" /var/lib/minikube/build/build.2026109689.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2026109689.tar': No such file or directory
I1213 13:28:41.659009  481931 ssh_runner.go:362] scp /tmp/build.2026109689.tar --> /var/lib/minikube/build/build.2026109689.tar (3072 bytes)
I1213 13:28:41.679968  481931 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2026109689
I1213 13:28:41.690455  481931 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2026109689 -xf /var/lib/minikube/build/build.2026109689.tar
I1213 13:28:41.699760  481931 containerd.go:394] Building image: /var/lib/minikube/build/build.2026109689
I1213 13:28:41.699855  481931 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2026109689 --local dockerfile=/var/lib/minikube/build/build.2026109689 --output type=image,name=localhost/my-image:functional-017456
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.8s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context:
#3 transferring context: 2B done
#3 DONE 0.2s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.4s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.4s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.9s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.9s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:564122fa87df6f4914feedf82df852bcd85d4f062122a61aff671e5efb388c74 done
#8 exporting config sha256:ea0171a333ddd602156c348424d28817b71ded4e4abef3b554912cfb1ceb5482 0.0s done
#8 naming to localhost/my-image:functional-017456 done
#8 DONE 0.1s
I1213 13:28:46.058482  481931 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2026109689 --local dockerfile=/var/lib/minikube/build/build.2026109689 --output type=image,name=localhost/my-image:functional-017456: (4.358595009s)
I1213 13:28:46.058542  481931 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2026109689
I1213 13:28:46.067953  481931 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2026109689.tar
I1213 13:28:46.076212  481931 build_images.go:218] Built localhost/my-image:functional-017456 from /tmp/build.2026109689.tar
I1213 13:28:46.076262  481931 build_images.go:134] succeeded building to: functional-017456
I1213 13:28:46.076274  481931 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (5.21s)
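For reference, the buildkit steps above (#1–#8) correspond to a three-line Dockerfile in the testdata/build context (FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /) plus a small content.txt. A minimal sketch of repeating the same in-cluster build by hand, assuming that context is available locally:

# build inside the cluster's containerd (via buildctl), exactly as the test invokes it
out/minikube-linux-amd64 -p functional-017456 image build -t localhost/my-image:functional-017456 testdata/build --alsologtostderr
# confirm the resulting image is visible to the runtime
out/minikube-linux-amd64 -p functional-017456 image ls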

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.82s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-017456
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.82s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 image load --daemon kicbase/echo-server:functional-017456 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.23s)
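The daemon-load path shown here can be repeated by hand; a minimal sketch, assuming the kicbase/echo-server:functional-017456 tag created in Setup still exists in the host Docker daemon:

# copy the image from the host Docker daemon into the cluster's containerd
out/minikube-linux-amd64 -p functional-017456 image load --daemon kicbase/echo-server:functional-017456 --alsologtostderr
# verify the runtime now lists it
out/minikube-linux-amd64 -p functional-017456 image ls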

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.20s)
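All three UpdateContextCmd cases drive the same subcommand; a minimal sketch, assuming kubectl is installed on the host (the current-context check is an added illustration, not part of the test):

# refresh the kubeconfig entry (IP/port) for this profile
out/minikube-linux-amd64 -p functional-017456 update-context --alsologtostderr -v=2
kubectl config current-context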

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 image load --daemon kicbase/echo-server:functional-017456 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "346.445295ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "62.400771ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (2.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-017456
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 image load --daemon kicbase/echo-server:functional-017456 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-017456 image load --daemon kicbase/echo-server:functional-017456 --alsologtostderr: (1.258989187s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (2.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "340.529361ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "64.452947ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (15.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-017456 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1381143754/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765632515344500749" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1381143754/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765632515344500749" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1381143754/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765632515344500749" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1381143754/001/test-1765632515344500749
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-017456 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (329.688889ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 13:28:35.674544  405531 retry.go:31] will retry after 692.021687ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 13:28 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 13:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 13:28 test-1765632515344500749
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh cat /mount-9p/test-1765632515344500749
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-017456 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [4c7aad80-3395-4f9b-a96a-c629d22bdf94] Pending
helpers_test.go:353: "busybox-mount" [4c7aad80-3395-4f9b-a96a-c629d22bdf94] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [4c7aad80-3395-4f9b-a96a-c629d22bdf94] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [4c7aad80-3395-4f9b-a96a-c629d22bdf94] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 12.004171724s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-017456 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-017456 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1381143754/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (15.15s)
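The 9p mount flow can be reproduced outside the test harness; a minimal sketch, assuming a running functional-017456 profile (the /tmp/mount-demo path is illustrative):

# serve a host directory into the guest at /mount-9p (the test runs this as a daemon)
out/minikube-linux-amd64 mount -p functional-017456 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
# verify from inside the guest, inspect the contents, then clean up
out/minikube-linux-amd64 -p functional-017456 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-017456 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-017456 ssh "sudo umount -f /mount-9p"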

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 image save kicbase/echo-server:functional-017456 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 image rm kicbase/echo-server:functional-017456 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.67s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.67s)
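ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise a save/remove/restore round trip; a minimal sketch, assuming a writable tarball path (the /tmp path is illustrative):

# export from the cluster runtime to a tarball, drop the image, then restore it from the file
out/minikube-linux-amd64 -p functional-017456 image save kicbase/echo-server:functional-017456 /tmp/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-017456 image rm kicbase/echo-server:functional-017456 --alsologtostderr
out/minikube-linux-amd64 -p functional-017456 image load /tmp/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-017456 image ls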

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-017456
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 image save --daemon kicbase/echo-server:functional-017456 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-017456
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.96s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-017456 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1116406898/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-017456 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (298.344664ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 13:28:50.795271  405531 retry.go:31] will retry after 587.511948ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-017456 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1116406898/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-017456 ssh "sudo umount -f /mount-9p": exit status 1 (282.457284ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-017456 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-017456 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1116406898/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.96s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-017456 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3112510212/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-017456 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3112510212/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-017456 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3112510212/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-017456 ssh "findmnt -T" /mount1: exit status 1 (359.528249ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 13:28:52.815404  405531 retry.go:31] will retry after 292.500305ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-017456 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-017456 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-017456 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3112510212/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-017456 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3112510212/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-017456 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3112510212/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.58s)
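VerifyCleanup relies on minikube's mount kill switch rather than unmounting each path individually; a minimal sketch, assuming one or more mount daemons are running for the profile (paths are illustrative):

# background mounts, as the test starts them
out/minikube-linux-amd64 mount -p functional-017456 /tmp/mount-demo:/mount1 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-017456 /tmp/mount-demo:/mount2 --alsologtostderr -v=1 &
# terminate every mount process belonging to the profile in one call
out/minikube-linux-amd64 mount -p functional-017456 --kill=true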

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-017456
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-017456
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-017456
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (107.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-138563 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m47.128368869s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (107.85s)
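The HA start above is directly reproducible; a minimal sketch using the same flags as this job (three control-plane nodes via --ha, Docker driver, containerd runtime):

out/minikube-linux-amd64 -p ha-138563 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker --container-runtime=containerd
# per-node host/kubelet/apiserver/kubeconfig summary
out/minikube-linux-amd64 -p ha-138563 status --alsologtostderr -v 5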

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-138563 kubectl -- rollout status deployment/busybox: (3.652522255s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-8fgl9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-d9bfk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-tz58g -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-8fgl9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-d9bfk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-tz58g -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-8fgl9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-d9bfk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-tz58g -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.79s)
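DeployApp is a plain kubectl flow routed through the minikube wrapper; a minimal sketch, assuming the ha-pod-dns-test.yaml busybox deployment from the test data (pod names differ per run):

out/minikube-linux-amd64 -p ha-138563 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
out/minikube-linux-amd64 -p ha-138563 kubectl -- rollout status deployment/busybox
# spot-check DNS from one pod: an external name and the in-cluster service name
out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-8fgl9 -- nslookup kubernetes.io
out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-8fgl9 -- nslookup kubernetes.default.svc.cluster.local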

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-8fgl9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-8fgl9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-d9bfk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-d9bfk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-tz58g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-tz58g -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.19s)
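Host reachability is verified by resolving host.minikube.internal inside a pod and pinging the address it returns (192.168.49.1, the Docker network gateway, in this run); a minimal sketch with values taken from this run:

out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-8fgl9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-amd64 -p ha-138563 kubectl -- exec busybox-7b57f96db7-8fgl9 -- sh -c "ping -c 1 192.168.49.1"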

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (23.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-138563 node add --alsologtostderr -v 5: (22.570230338s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.47s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-138563 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp testdata/cp-test.txt ha-138563:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp ha-138563:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970770729/001/cp-test_ha-138563.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp ha-138563:/home/docker/cp-test.txt ha-138563-m02:/home/docker/cp-test_ha-138563_ha-138563-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m02 "sudo cat /home/docker/cp-test_ha-138563_ha-138563-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp ha-138563:/home/docker/cp-test.txt ha-138563-m03:/home/docker/cp-test_ha-138563_ha-138563-m03.txt
E1213 13:31:24.614239  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:31:24.620635  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:31:24.632059  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:31:24.653439  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:31:24.694873  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:31:24.776387  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563 "sudo cat /home/docker/cp-test.txt"
E1213 13:31:24.938309  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m03 "sudo cat /home/docker/cp-test_ha-138563_ha-138563-m03.txt"
E1213 13:31:25.259605  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp ha-138563:/home/docker/cp-test.txt ha-138563-m04:/home/docker/cp-test_ha-138563_ha-138563-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563 "sudo cat /home/docker/cp-test.txt"
E1213 13:31:25.901451  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m04 "sudo cat /home/docker/cp-test_ha-138563_ha-138563-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp testdata/cp-test.txt ha-138563-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp ha-138563-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970770729/001/cp-test_ha-138563-m02.txt
E1213 13:31:27.183196  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp ha-138563-m02:/home/docker/cp-test.txt ha-138563:/home/docker/cp-test_ha-138563-m02_ha-138563.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563 "sudo cat /home/docker/cp-test_ha-138563-m02_ha-138563.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp ha-138563-m02:/home/docker/cp-test.txt ha-138563-m03:/home/docker/cp-test_ha-138563-m02_ha-138563-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m03 "sudo cat /home/docker/cp-test_ha-138563-m02_ha-138563-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp ha-138563-m02:/home/docker/cp-test.txt ha-138563-m04:/home/docker/cp-test_ha-138563-m02_ha-138563-m04.txt
E1213 13:31:29.744628  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m04 "sudo cat /home/docker/cp-test_ha-138563-m02_ha-138563-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp testdata/cp-test.txt ha-138563-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp ha-138563-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970770729/001/cp-test_ha-138563-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp ha-138563-m03:/home/docker/cp-test.txt ha-138563:/home/docker/cp-test_ha-138563-m03_ha-138563.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563 "sudo cat /home/docker/cp-test_ha-138563-m03_ha-138563.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp ha-138563-m03:/home/docker/cp-test.txt ha-138563-m02:/home/docker/cp-test_ha-138563-m03_ha-138563-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m02 "sudo cat /home/docker/cp-test_ha-138563-m03_ha-138563-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp ha-138563-m03:/home/docker/cp-test.txt ha-138563-m04:/home/docker/cp-test_ha-138563-m03_ha-138563-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m04 "sudo cat /home/docker/cp-test_ha-138563-m03_ha-138563-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp testdata/cp-test.txt ha-138563-m04:/home/docker/cp-test.txt
E1213 13:31:34.866359  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp ha-138563-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970770729/001/cp-test_ha-138563-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp ha-138563-m04:/home/docker/cp-test.txt ha-138563:/home/docker/cp-test_ha-138563-m04_ha-138563.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563 "sudo cat /home/docker/cp-test_ha-138563-m04_ha-138563.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp ha-138563-m04:/home/docker/cp-test.txt ha-138563-m02:/home/docker/cp-test_ha-138563-m04_ha-138563-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m02 "sudo cat /home/docker/cp-test_ha-138563-m04_ha-138563-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 cp ha-138563-m04:/home/docker/cp-test.txt ha-138563-m03:/home/docker/cp-test_ha-138563-m04_ha-138563-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m03 "sudo cat /home/docker/cp-test_ha-138563-m04_ha-138563-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.43s)
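Each CopyFile case pairs a minikube cp with an ssh-side cat to confirm the file landed intact; a minimal sketch covering one host-to-node and one node-to-node copy:

out/minikube-linux-amd64 -p ha-138563 cp testdata/cp-test.txt ha-138563:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563 "sudo cat /home/docker/cp-test.txt"
# node-to-node copies go through the same cp subcommand
out/minikube-linux-amd64 -p ha-138563 cp ha-138563:/home/docker/cp-test.txt ha-138563-m02:/home/docker/cp-test_ha-138563_ha-138563-m02.txt
out/minikube-linux-amd64 -p ha-138563 ssh -n ha-138563-m02 "sudo cat /home/docker/cp-test_ha-138563_ha-138563-m02.txt"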

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 node stop m02 --alsologtostderr -v 5
E1213 13:31:45.108459  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-138563 node stop m02 --alsologtostderr -v 5: (12.032388605s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-138563 status --alsologtostderr -v 5: exit status 7 (702.22369ms)

                                                
                                                
-- stdout --
	ha-138563
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-138563-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-138563-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-138563-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:31:50.877768  505537 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:31:50.877889  505537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:31:50.877898  505537 out.go:374] Setting ErrFile to fd 2...
	I1213 13:31:50.877902  505537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:31:50.878115  505537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:31:50.878352  505537 out.go:368] Setting JSON to false
	I1213 13:31:50.878381  505537 mustload.go:66] Loading cluster: ha-138563
	I1213 13:31:50.878517  505537 notify.go:221] Checking for updates...
	I1213 13:31:50.878878  505537 config.go:182] Loaded profile config "ha-138563": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:31:50.878895  505537 status.go:174] checking status of ha-138563 ...
	I1213 13:31:50.879349  505537 cli_runner.go:164] Run: docker container inspect ha-138563 --format={{.State.Status}}
	I1213 13:31:50.898168  505537 status.go:371] ha-138563 host status = "Running" (err=<nil>)
	I1213 13:31:50.898189  505537 host.go:66] Checking if "ha-138563" exists ...
	I1213 13:31:50.898514  505537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-138563
	I1213 13:31:50.916592  505537 host.go:66] Checking if "ha-138563" exists ...
	I1213 13:31:50.916892  505537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:31:50.916939  505537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-138563
	I1213 13:31:50.936415  505537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33177 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/ha-138563/id_rsa Username:docker}
	I1213 13:31:51.030374  505537 ssh_runner.go:195] Run: systemctl --version
	I1213 13:31:51.037411  505537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:31:51.049929  505537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:31:51.110934  505537 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-13 13:31:51.097787156 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:31:51.111718  505537 kubeconfig.go:125] found "ha-138563" server: "https://192.168.49.254:8443"
	I1213 13:31:51.111761  505537 api_server.go:166] Checking apiserver status ...
	I1213 13:31:51.111819  505537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:31:51.124917  505537 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup
	W1213 13:31:51.133944  505537 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 13:31:51.134002  505537 ssh_runner.go:195] Run: ls
	I1213 13:31:51.137850  505537 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 13:31:51.142271  505537 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 13:31:51.142298  505537 status.go:463] ha-138563 apiserver status = Running (err=<nil>)
	I1213 13:31:51.142308  505537 status.go:176] ha-138563 status: &{Name:ha-138563 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:31:51.142338  505537 status.go:174] checking status of ha-138563-m02 ...
	I1213 13:31:51.142647  505537 cli_runner.go:164] Run: docker container inspect ha-138563-m02 --format={{.State.Status}}
	I1213 13:31:51.161232  505537 status.go:371] ha-138563-m02 host status = "Stopped" (err=<nil>)
	I1213 13:31:51.161253  505537 status.go:384] host is not running, skipping remaining checks
	I1213 13:31:51.161260  505537 status.go:176] ha-138563-m02 status: &{Name:ha-138563-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:31:51.161288  505537 status.go:174] checking status of ha-138563-m03 ...
	I1213 13:31:51.161563  505537 cli_runner.go:164] Run: docker container inspect ha-138563-m03 --format={{.State.Status}}
	I1213 13:31:51.179462  505537 status.go:371] ha-138563-m03 host status = "Running" (err=<nil>)
	I1213 13:31:51.179488  505537 host.go:66] Checking if "ha-138563-m03" exists ...
	I1213 13:31:51.179763  505537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-138563-m03
	I1213 13:31:51.198715  505537 host.go:66] Checking if "ha-138563-m03" exists ...
	I1213 13:31:51.199078  505537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:31:51.199136  505537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-138563-m03
	I1213 13:31:51.217872  505537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33187 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/ha-138563-m03/id_rsa Username:docker}
	I1213 13:31:51.311896  505537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:31:51.325608  505537 kubeconfig.go:125] found "ha-138563" server: "https://192.168.49.254:8443"
	I1213 13:31:51.325638  505537 api_server.go:166] Checking apiserver status ...
	I1213 13:31:51.325675  505537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:31:51.336990  505537 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1317/cgroup
	W1213 13:31:51.345933  505537 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1317/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 13:31:51.345998  505537 ssh_runner.go:195] Run: ls
	I1213 13:31:51.349852  505537 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 13:31:51.354016  505537 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 13:31:51.354039  505537 status.go:463] ha-138563-m03 apiserver status = Running (err=<nil>)
	I1213 13:31:51.354050  505537 status.go:176] ha-138563-m03 status: &{Name:ha-138563-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:31:51.354070  505537 status.go:174] checking status of ha-138563-m04 ...
	I1213 13:31:51.354342  505537 cli_runner.go:164] Run: docker container inspect ha-138563-m04 --format={{.State.Status}}
	I1213 13:31:51.373004  505537 status.go:371] ha-138563-m04 host status = "Running" (err=<nil>)
	I1213 13:31:51.373032  505537 host.go:66] Checking if "ha-138563-m04" exists ...
	I1213 13:31:51.373280  505537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-138563-m04
	I1213 13:31:51.391261  505537 host.go:66] Checking if "ha-138563-m04" exists ...
	I1213 13:31:51.391594  505537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:31:51.391646  505537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-138563-m04
	I1213 13:31:51.409921  505537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/ha-138563-m04/id_rsa Username:docker}
	I1213 13:31:51.504625  505537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:31:51.517589  505537 status.go:176] ha-138563-m04 status: &{Name:ha-138563-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.74s)
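Stopping one of the three control-plane nodes leaves the cluster degraded but still serving, and minikube status signals the degradation through its exit code (7 in this run); a minimal sketch:

out/minikube-linux-amd64 -p ha-138563 node stop m02 --alsologtostderr -v 5
# exits non-zero while ha-138563-m02 reports Stopped; the remaining control planes still answer /healthz
out/minikube-linux-amd64 -p ha-138563 status --alsologtostderr -v 5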

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (8.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-138563 node start m02 --alsologtostderr -v 5: (7.626884492s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.58s)
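
For reference, the node-restart flow exercised above maps to the following commands (a sketch; a stock minikube binary can stand in for the test build out/minikube-linux-amd64, and ha-138563 is the profile from this run):

    # restart the previously stopped secondary control-plane node, then re-check cluster health
    minikube -p ha-138563 node start m02 --alsologtostderr -v 5
    minikube -p ha-138563 status --alsologtostderr -v 5
    kubectl get nodes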

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (94.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 stop --alsologtostderr -v 5
E1213 13:32:05.590226  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:32:17.853139  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-138563 stop --alsologtostderr -v 5: (37.238385052s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 start --wait true --alsologtostderr -v 5
E1213 13:32:46.551604  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:33:19.815928  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:33:19.822332  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:33:19.833679  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:33:19.855082  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:33:19.896642  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:33:19.978225  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:33:20.139786  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:33:20.461463  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:33:21.103662  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:33:22.385128  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:33:24.946462  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:33:30.068062  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-138563 start --wait true --alsologtostderr -v 5: (56.637788671s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (94.02s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 node delete m03 --alsologtostderr -v 5
E1213 13:33:40.309523  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:33:40.916521  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-138563 node delete m03 --alsologtostderr -v 5: (8.581693871s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.41s)
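
The readiness check at ha_test.go:521 above uses a kubectl go-template; reproduced by hand (with the harness quoting simplified) the delete-and-verify step looks roughly like this:

    minikube -p ha-138563 node delete m03
    minikube -p ha-138563 status
    # print one Ready-condition status per node; every line should read "True"
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'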

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 stop --alsologtostderr -v 5
E1213 13:34:00.791359  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:34:08.473217  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-138563 stop --alsologtostderr -v 5: (36.015299938s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-138563 status --alsologtostderr -v 5: exit status 7 (121.516743ms)

                                                
                                                
-- stdout --
	ha-138563
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-138563-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-138563-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:34:21.960540  521798 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:34:21.960678  521798 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:34:21.960690  521798 out.go:374] Setting ErrFile to fd 2...
	I1213 13:34:21.960694  521798 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:34:21.960901  521798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:34:21.961129  521798 out.go:368] Setting JSON to false
	I1213 13:34:21.961159  521798 mustload.go:66] Loading cluster: ha-138563
	I1213 13:34:21.961287  521798 notify.go:221] Checking for updates...
	I1213 13:34:21.961656  521798 config.go:182] Loaded profile config "ha-138563": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:34:21.961676  521798 status.go:174] checking status of ha-138563 ...
	I1213 13:34:21.962236  521798 cli_runner.go:164] Run: docker container inspect ha-138563 --format={{.State.Status}}
	I1213 13:34:21.983591  521798 status.go:371] ha-138563 host status = "Stopped" (err=<nil>)
	I1213 13:34:21.983611  521798 status.go:384] host is not running, skipping remaining checks
	I1213 13:34:21.983621  521798 status.go:176] ha-138563 status: &{Name:ha-138563 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:34:21.983655  521798 status.go:174] checking status of ha-138563-m02 ...
	I1213 13:34:21.983968  521798 cli_runner.go:164] Run: docker container inspect ha-138563-m02 --format={{.State.Status}}
	I1213 13:34:22.001570  521798 status.go:371] ha-138563-m02 host status = "Stopped" (err=<nil>)
	I1213 13:34:22.001604  521798 status.go:384] host is not running, skipping remaining checks
	I1213 13:34:22.001612  521798 status.go:176] ha-138563-m02 status: &{Name:ha-138563-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:34:22.001632  521798 status.go:174] checking status of ha-138563-m04 ...
	I1213 13:34:22.001888  521798 cli_runner.go:164] Run: docker container inspect ha-138563-m04 --format={{.State.Status}}
	I1213 13:34:22.020061  521798 status.go:371] ha-138563-m04 host status = "Stopped" (err=<nil>)
	I1213 13:34:22.020110  521798 status.go:384] host is not running, skipping remaining checks
	I1213 13:34:22.020117  521798 status.go:176] ha-138563-m04 status: &{Name:ha-138563-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.14s)
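
As the non-zero exit above shows, minikube status reports stopped hosts through its exit code (7 in this run) rather than through stderr; a hand-run check could look like this sketch:

    minikube -p ha-138563 stop
    minikube -p ha-138563 status        # prints host/kubelet/apiserver state per node
    echo "status exit code: $?"         # non-zero (7 here) once the hosts are stopped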

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (57.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1213 13:34:41.753545  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-138563 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (56.501130364s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (57.30s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (72.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 node add --control-plane --alsologtostderr -v 5
E1213 13:36:03.676311  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:36:24.613529  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-138563 node add --control-plane --alsologtostderr -v 5: (1m11.908403401s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-138563 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.80s)
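
Adding a further control-plane node to the HA profile, as ha_test.go:607 does above, is a single command (sketch, same caveat about substituting a stock minikube for the test build):

    # join a new control-plane node to the existing HA cluster, then verify
    minikube -p ha-138563 node add --control-plane --alsologtostderr -v 5
    minikube -p ha-138563 status --alsologtostderr -v 5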

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                    
TestJSONOutput/start/Command (39.71s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-714771 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1213 13:36:52.315445  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:37:17.855205  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-714771 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (39.714093036s)
--- PASS: TestJSONOutput/start/Command (39.71s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-714771 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-714771 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.88s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-714771 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-714771 --output=json --user=testUser: (5.882242361s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-001943 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-001943 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (80.62873ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f30fa1a6-3c45-4448-8f60-2823917681ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-001943] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"942b1e84-6a8d-41d2-9c4d-30046dea5c75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22122"}}
	{"specversion":"1.0","id":"c6fc5a04-0499-4c0d-8bd5-0c03897c2563","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2e354f19-0187-4e80-aeb1-86dd6b0edb45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig"}}
	{"specversion":"1.0","id":"feda07bb-1afd-4077-b6a6-a15b3a52a3e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube"}}
	{"specversion":"1.0","id":"c277ecb8-8828-472d-9ec5-cd738ceed80d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e2688341-6f3e-47c5-883b-5fd39d3950b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5e9f7f99-c021-4cbe-9a12-c8ed2cb8c3e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-001943" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-001943
--- PASS: TestErrorJSONOutput (0.24s)
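
Each line of the --output=json stream shown above is a CloudEvents-style JSON object with a "type" and a "data" payload. Assuming jq is installed and the events arrive one object per line (as they do in the stdout block above), an error event such as DRV_UNSUPPORTED_OS can be pulled out with something along these lines (illustrative, not part of the test):

    minikube start -p json-output-error-001943 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name) (exit \(.data.exitcode)): \(.data.message)"'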

                                                
                                    
TestKicCustomNetwork/create_custom_network (30.82s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-486350 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-486350 --network=: (28.648319149s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-486350" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-486350
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-486350: (2.157013947s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.82s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.29s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-220278 --network=bridge
E1213 13:38:19.816832  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-220278 --network=bridge: (21.274899226s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-220278" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-220278
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-220278: (1.992134346s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.29s)

                                                
                                    
TestKicExistingNetwork (22.35s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1213 13:38:27.332798  405531 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1213 13:38:27.351365  405531 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1213 13:38:27.351428  405531 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1213 13:38:27.351450  405531 cli_runner.go:164] Run: docker network inspect existing-network
W1213 13:38:27.368219  405531 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1213 13:38:27.368247  405531 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1213 13:38:27.368261  405531 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1213 13:38:27.368447  405531 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 13:38:27.385516  405531 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dd549186b5b6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:8b:bc:e4:2d:3c} reservation:<nil>}
I1213 13:38:27.385924  405531 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c4bdf0}
I1213 13:38:27.385957  405531 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1213 13:38:27.386003  405531 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1213 13:38:27.433935  405531 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-959270 --network=existing-network
E1213 13:38:47.517810  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-959270 --network=existing-network: (20.204254112s)
helpers_test.go:176: Cleaning up "existing-network-959270" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-959270
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-959270: (2.010229799s)
I1213 13:38:49.666214  405531 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.35s)
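
The pre-created-network case above boils down to creating a bridge network yourself and pointing --network at it. A simplified form of the commands in the log (the -o ip-masq/icc options and the minikube labels are omitted here):

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o com.docker.network.driver.mtu=1500 existing-network
    minikube start -p existing-network-959270 --network=existing-network
    docker network ls --format '{{.Name}}'   # the cluster reuses existing-network instead of creating one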

                                                
                                    
TestKicCustomSubnet (25.77s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-137571 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-137571 --subnet=192.168.60.0/24: (23.630830722s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-137571 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-137571" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-137571
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-137571: (2.120191149s)
--- PASS: TestKicCustomSubnet (25.77s)
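
The custom-subnet check can be reproduced directly; the inspect format string is the same one the test uses:

    minikube start -p custom-subnet-137571 --subnet=192.168.60.0/24
    # the docker network minikube creates for the profile should carry the requested subnet
    docker network inspect custom-subnet-137571 --format '{{(index .IPAM.Config 0).Subnet}}'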

                                                
                                    
TestKicStaticIP (25.44s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-535854 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-535854 --static-ip=192.168.200.200: (23.125330263s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-535854 ip
helpers_test.go:176: Cleaning up "static-ip-535854" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-535854
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-535854: (2.159832717s)
--- PASS: TestKicStaticIP (25.44s)
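
Likewise for the static-IP case: start with --static-ip and confirm the assignment with minikube ip (a sketch based on the commands above):

    minikube start -p static-ip-535854 --static-ip=192.168.200.200
    minikube -p static-ip-535854 ip   # expected to print 192.168.200.200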

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (49.99s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-920562 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-920562 --driver=docker  --container-runtime=containerd: (23.752112584s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-922612 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-922612 --driver=docker  --container-runtime=containerd: (20.673733356s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-920562
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-922612
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-922612" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-922612
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-922612: (1.943956341s)
helpers_test.go:176: Cleaning up "first-920562" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-920562
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-920562: (2.357438148s)
--- PASS: TestMinikubeProfile (49.99s)
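
The profile handling exercised above corresponds to the following sequence (sketch; the delete steps mirror the cleanup the helpers perform):

    minikube start -p first-920562 --driver=docker --container-runtime=containerd
    minikube start -p second-922612 --driver=docker --container-runtime=containerd
    minikube profile first-920562     # make first-920562 the active profile
    minikube profile list -ojson      # machine-readable view of both profiles
    minikube delete -p second-922612
    minikube delete -p first-920562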

                                                
                                    
TestMountStart/serial/StartWithMountFirst (4.4s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-270257 --memory=3072 --mount-string /tmp/TestMountStartserial4030741206/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-270257 --memory=3072 --mount-string /tmp/TestMountStartserial4030741206/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.395950781s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.40s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-270257 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
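
The mount flags from StartWithMountFirst plus the ssh check above amount to the following (sketch; /tmp/host-dir stands in for the per-test temporary directory used in the log):

    minikube start -p mount-start-1-270257 --memory=3072 \
      --mount-string /tmp/host-dir:/minikube-host \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=containerd
    # the host directory should be visible inside the node at /minikube-host
    minikube -p mount-start-1-270257 ssh -- ls /minikube-host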

                                                
                                    
TestMountStart/serial/StartWithMountSecond (4.41s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-288767 --memory=3072 --mount-string /tmp/TestMountStartserial4030741206/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-288767 --memory=3072 --mount-string /tmp/TestMountStartserial4030741206/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.413773616s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.41s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-288767 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-270257 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-270257 --alsologtostderr -v=5: (1.669145094s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-288767 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-288767
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-288767: (1.256388063s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.45s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-288767
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-288767: (6.448889933s)
--- PASS: TestMountStart/serial/RestartStopped (7.45s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-288767 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (63.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-973418 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1213 13:41:24.613472  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-973418 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m2.765178139s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (63.25s)
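
Bringing up the two-node profile used by the rest of this group is a single start invocation (taken from multinode_test.go:96 above, with a stock minikube substituted for the test build):

    minikube start -p multinode-973418 --wait=true --memory=3072 --nodes=2 \
      --driver=docker --container-runtime=containerd
    minikube -p multinode-973418 status --alsologtostderr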

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973418 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973418 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-973418 -- rollout status deployment/busybox: (3.340983572s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973418 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973418 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973418 -- exec busybox-7b57f96db7-56ls7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973418 -- exec busybox-7b57f96db7-594s2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973418 -- exec busybox-7b57f96db7-56ls7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973418 -- exec busybox-7b57f96db7-594s2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973418 -- exec busybox-7b57f96db7-56ls7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973418 -- exec busybox-7b57f96db7-594s2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.90s)
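
The deployment/DNS round-trip above can be replayed with minikube's bundled kubectl; the manifest is the busybox Deployment the test ships as testdata/multinodes/multinode-pod-dns-test.yaml, and the pod name is taken from whatever get pods returns (sketch):

    minikube kubectl -p multinode-973418 -- apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    minikube kubectl -p multinode-973418 -- rollout status deployment/busybox
    minikube kubectl -p multinode-973418 -- get pods -o jsonpath='{.items[*].status.podIP}'
    # run the same in-cluster DNS check the test performs, against one of the pods
    POD=$(minikube kubectl -p multinode-973418 -- get pods -o jsonpath='{.items[0].metadata.name}')
    minikube kubectl -p multinode-973418 -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local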

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973418 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973418 -- exec busybox-7b57f96db7-56ls7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973418 -- exec busybox-7b57f96db7-56ls7 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973418 -- exec busybox-7b57f96db7-594s2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973418 -- exec busybox-7b57f96db7-594s2 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                    
TestMultiNode/serial/AddNode (23.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-973418 -v=5 --alsologtostderr
E1213 13:42:17.853593  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-973418 -v=5 --alsologtostderr: (23.270926288s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.92s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-973418 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 cp testdata/cp-test.txt multinode-973418:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 ssh -n multinode-973418 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 cp multinode-973418:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2294620969/001/cp-test_multinode-973418.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 ssh -n multinode-973418 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 cp multinode-973418:/home/docker/cp-test.txt multinode-973418-m02:/home/docker/cp-test_multinode-973418_multinode-973418-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 ssh -n multinode-973418 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 ssh -n multinode-973418-m02 "sudo cat /home/docker/cp-test_multinode-973418_multinode-973418-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 cp multinode-973418:/home/docker/cp-test.txt multinode-973418-m03:/home/docker/cp-test_multinode-973418_multinode-973418-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 ssh -n multinode-973418 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 ssh -n multinode-973418-m03 "sudo cat /home/docker/cp-test_multinode-973418_multinode-973418-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 cp testdata/cp-test.txt multinode-973418-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 ssh -n multinode-973418-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 cp multinode-973418-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2294620969/001/cp-test_multinode-973418-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 ssh -n multinode-973418-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 cp multinode-973418-m02:/home/docker/cp-test.txt multinode-973418:/home/docker/cp-test_multinode-973418-m02_multinode-973418.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 ssh -n multinode-973418-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 ssh -n multinode-973418 "sudo cat /home/docker/cp-test_multinode-973418-m02_multinode-973418.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 cp multinode-973418-m02:/home/docker/cp-test.txt multinode-973418-m03:/home/docker/cp-test_multinode-973418-m02_multinode-973418-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 ssh -n multinode-973418-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 ssh -n multinode-973418-m03 "sudo cat /home/docker/cp-test_multinode-973418-m02_multinode-973418-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 cp testdata/cp-test.txt multinode-973418-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 ssh -n multinode-973418-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 cp multinode-973418-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2294620969/001/cp-test_multinode-973418-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 ssh -n multinode-973418-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 cp multinode-973418-m03:/home/docker/cp-test.txt multinode-973418:/home/docker/cp-test_multinode-973418-m03_multinode-973418.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 ssh -n multinode-973418-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 ssh -n multinode-973418 "sudo cat /home/docker/cp-test_multinode-973418-m03_multinode-973418.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 cp multinode-973418-m03:/home/docker/cp-test.txt multinode-973418-m02:/home/docker/cp-test_multinode-973418-m03_multinode-973418-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 ssh -n multinode-973418-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 ssh -n multinode-973418-m02 "sudo cat /home/docker/cp-test_multinode-973418-m03_multinode-973418-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.85s)

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-973418 node stop m03: (1.252053552s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-973418 status: exit status 7 (493.836446ms)

                                                
                                                
-- stdout --
	multinode-973418
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-973418-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-973418-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-973418 status --alsologtostderr: exit status 7 (498.089885ms)

                                                
                                                
-- stdout --
	multinode-973418
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-973418-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-973418-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:42:38.404426  584220 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:42:38.404691  584220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:42:38.404700  584220 out.go:374] Setting ErrFile to fd 2...
	I1213 13:42:38.404704  584220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:42:38.404963  584220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:42:38.405182  584220 out.go:368] Setting JSON to false
	I1213 13:42:38.405213  584220 mustload.go:66] Loading cluster: multinode-973418
	I1213 13:42:38.405257  584220 notify.go:221] Checking for updates...
	I1213 13:42:38.405650  584220 config.go:182] Loaded profile config "multinode-973418": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:42:38.405667  584220 status.go:174] checking status of multinode-973418 ...
	I1213 13:42:38.406136  584220 cli_runner.go:164] Run: docker container inspect multinode-973418 --format={{.State.Status}}
	I1213 13:42:38.425472  584220 status.go:371] multinode-973418 host status = "Running" (err=<nil>)
	I1213 13:42:38.425495  584220 host.go:66] Checking if "multinode-973418" exists ...
	I1213 13:42:38.425779  584220 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-973418
	I1213 13:42:38.445000  584220 host.go:66] Checking if "multinode-973418" exists ...
	I1213 13:42:38.445309  584220 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:42:38.445380  584220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-973418
	I1213 13:42:38.463623  584220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33297 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/multinode-973418/id_rsa Username:docker}
	I1213 13:42:38.556527  584220 ssh_runner.go:195] Run: systemctl --version
	I1213 13:42:38.562742  584220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:42:38.574795  584220 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:42:38.633603  584220 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-13 13:42:38.62388982 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:42:38.634180  584220 kubeconfig.go:125] found "multinode-973418" server: "https://192.168.67.2:8443"
	I1213 13:42:38.634225  584220 api_server.go:166] Checking apiserver status ...
	I1213 13:42:38.634270  584220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:42:38.646818  584220 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1347/cgroup
	W1213 13:42:38.655459  584220 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1347/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 13:42:38.655519  584220 ssh_runner.go:195] Run: ls
	I1213 13:42:38.659199  584220 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1213 13:42:38.663620  584220 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1213 13:42:38.663641  584220 status.go:463] multinode-973418 apiserver status = Running (err=<nil>)
	I1213 13:42:38.663651  584220 status.go:176] multinode-973418 status: &{Name:multinode-973418 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:42:38.663667  584220 status.go:174] checking status of multinode-973418-m02 ...
	I1213 13:42:38.663919  584220 cli_runner.go:164] Run: docker container inspect multinode-973418-m02 --format={{.State.Status}}
	I1213 13:42:38.682445  584220 status.go:371] multinode-973418-m02 host status = "Running" (err=<nil>)
	I1213 13:42:38.682471  584220 host.go:66] Checking if "multinode-973418-m02" exists ...
	I1213 13:42:38.682735  584220 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-973418-m02
	I1213 13:42:38.700809  584220 host.go:66] Checking if "multinode-973418-m02" exists ...
	I1213 13:42:38.701065  584220 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:42:38.701114  584220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-973418-m02
	I1213 13:42:38.718077  584220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33302 SSHKeyPath:/home/jenkins/minikube-integration/22122-401936/.minikube/machines/multinode-973418-m02/id_rsa Username:docker}
	I1213 13:42:38.811283  584220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:42:38.824302  584220 status.go:176] multinode-973418-m02 status: &{Name:multinode-973418-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:42:38.824344  584220 status.go:174] checking status of multinode-973418-m03 ...
	I1213 13:42:38.824608  584220 cli_runner.go:164] Run: docker container inspect multinode-973418-m03 --format={{.State.Status}}
	I1213 13:42:38.841646  584220 status.go:371] multinode-973418-m03 host status = "Stopped" (err=<nil>)
	I1213 13:42:38.841669  584220 status.go:384] host is not running, skipping remaining checks
	I1213 13:42:38.841676  584220 status.go:176] multinode-973418-m03 status: &{Name:multinode-973418-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
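
A minimal shell sketch (not part of the test run) of the check this test exercises: stop one worker node, then confirm that status reports the stopped host and exits non-zero. The binary path and profile name are the ones from this run, and exit code 7 is simply what this run produced once a node was stopped.

    out/minikube-linux-amd64 -p multinode-973418 node stop m03
    out/minikube-linux-amd64 -p multinode-973418 status
    rc=$?                      # 7 in this run: one host/kubelet reported Stopped
    echo "status exit code: ${rc}"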

                                                
                                    
TestMultiNode/serial/StartAfterStop (6.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-973418 node start m03 -v=5 --alsologtostderr: (6.146922764s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.84s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (71.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-973418
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-973418
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-973418: (24.969915475s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-973418 --wait=true -v=5 --alsologtostderr
E1213 13:43:19.816173  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-973418 --wait=true -v=5 --alsologtostderr: (46.098631366s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-973418
--- PASS: TestMultiNode/serial/RestartKeepsNodes (71.20s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-973418 node delete m03: (4.63870857s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.24s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-973418 stop: (23.832038471s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-973418 status: exit status 7 (101.002998ms)

                                                
                                                
-- stdout --
	multinode-973418
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-973418-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-973418 status --alsologtostderr: exit status 7 (102.248867ms)

                                                
                                                
-- stdout --
	multinode-973418
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-973418-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:44:26.118786  593922 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:44:26.118931  593922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:44:26.118942  593922 out.go:374] Setting ErrFile to fd 2...
	I1213 13:44:26.118949  593922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:44:26.119169  593922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:44:26.119368  593922 out.go:368] Setting JSON to false
	I1213 13:44:26.119400  593922 mustload.go:66] Loading cluster: multinode-973418
	I1213 13:44:26.119519  593922 notify.go:221] Checking for updates...
	I1213 13:44:26.119939  593922 config.go:182] Loaded profile config "multinode-973418": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:44:26.119959  593922 status.go:174] checking status of multinode-973418 ...
	I1213 13:44:26.120622  593922 cli_runner.go:164] Run: docker container inspect multinode-973418 --format={{.State.Status}}
	I1213 13:44:26.140492  593922 status.go:371] multinode-973418 host status = "Stopped" (err=<nil>)
	I1213 13:44:26.140517  593922 status.go:384] host is not running, skipping remaining checks
	I1213 13:44:26.140524  593922 status.go:176] multinode-973418 status: &{Name:multinode-973418 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:44:26.140546  593922 status.go:174] checking status of multinode-973418-m02 ...
	I1213 13:44:26.140850  593922 cli_runner.go:164] Run: docker container inspect multinode-973418-m02 --format={{.State.Status}}
	I1213 13:44:26.159655  593922 status.go:371] multinode-973418-m02 host status = "Stopped" (err=<nil>)
	I1213 13:44:26.159707  593922 status.go:384] host is not running, skipping remaining checks
	I1213 13:44:26.159722  593922 status.go:176] multinode-973418-m02 status: &{Name:multinode-973418-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.04s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (44.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-973418 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-973418 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (44.168083714s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973418 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (44.77s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (24.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-973418
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-973418-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-973418-m02 --driver=docker  --container-runtime=containerd: exit status 14 (78.825798ms)

                                                
                                                
-- stdout --
	* [multinode-973418-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-973418-m02' is duplicated with machine name 'multinode-973418-m02' in profile 'multinode-973418'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-973418-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-973418-m03 --driver=docker  --container-runtime=containerd: (21.995443185s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-973418
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-973418: exit status 80 (299.859636ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-973418 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-973418-m03 already exists in multinode-973418-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-973418-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-973418-m03: (2.359668362s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.80s)
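
The two rejections above come from different checks: reusing an existing machine name as a profile name fails early with MK_USAGE (exit 14), while adding a node whose generated name collides with an existing profile fails with GUEST_NODE_ADD (exit 80). A minimal sketch of triggering the first check from a shell, with the names taken from this run:

    out/minikube-linux-amd64 start -p multinode-973418-m02 --driver=docker --container-runtime=containerd
    echo "exit code: $?"       # 14 (MK_USAGE) in this run: the profile name duplicates an existing machine name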

                                                
                                    
TestPreload (98.8s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-791980 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1213 13:46:24.614013  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-791980 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (45.116663855s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-791980 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-791980 image pull gcr.io/k8s-minikube/busybox: (2.335640317s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-791980
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-791980: (5.758856473s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-791980 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-791980 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (42.945912083s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-791980 image list
helpers_test.go:176: Cleaning up "test-preload-791980" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-791980
E1213 13:47:17.853619  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-791980: (2.408931677s)
--- PASS: TestPreload (98.80s)
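
The preload test above follows a simple pattern: create a cluster with preloaded images disabled, pull an extra image, stop the cluster, restart it with preload enabled, and confirm the pulled image is still listed. A condensed sketch of that sequence, using the commands and flags from this run:

    out/minikube-linux-amd64 start -p test-preload-791980 --memory=3072 --wait=true --preload=false --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 -p test-preload-791980 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-791980
    out/minikube-linux-amd64 start -p test-preload-791980 --preload=true --wait=true --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 -p test-preload-791980 image list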

                                                
                                    
TestScheduledStopUnix (98.73s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-307148 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-307148 --memory=3072 --driver=docker  --container-runtime=containerd: (22.615394024s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-307148 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 13:47:41.427574  612118 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:47:41.427684  612118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:47:41.427690  612118 out.go:374] Setting ErrFile to fd 2...
	I1213 13:47:41.427695  612118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:47:41.427918  612118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:47:41.428193  612118 out.go:368] Setting JSON to false
	I1213 13:47:41.428303  612118 mustload.go:66] Loading cluster: scheduled-stop-307148
	I1213 13:47:41.428649  612118 config.go:182] Loaded profile config "scheduled-stop-307148": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:47:41.428736  612118 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/config.json ...
	I1213 13:47:41.428950  612118 mustload.go:66] Loading cluster: scheduled-stop-307148
	I1213 13:47:41.429077  612118 config.go:182] Loaded profile config "scheduled-stop-307148": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-307148 -n scheduled-stop-307148
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-307148 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 13:47:41.820544  612270 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:47:41.820803  612270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:47:41.820811  612270 out.go:374] Setting ErrFile to fd 2...
	I1213 13:47:41.820826  612270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:47:41.821011  612270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:47:41.821249  612270 out.go:368] Setting JSON to false
	I1213 13:47:41.821447  612270 daemonize_unix.go:73] killing process 612153 as it is an old scheduled stop
	I1213 13:47:41.821553  612270 mustload.go:66] Loading cluster: scheduled-stop-307148
	I1213 13:47:41.821889  612270 config.go:182] Loaded profile config "scheduled-stop-307148": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:47:41.821963  612270 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/config.json ...
	I1213 13:47:41.822147  612270 mustload.go:66] Loading cluster: scheduled-stop-307148
	I1213 13:47:41.822241  612270 config.go:182] Loaded profile config "scheduled-stop-307148": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1213 13:47:41.827768  405531 retry.go:31] will retry after 69.231µs: open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/pid: no such file or directory
I1213 13:47:41.828918  405531 retry.go:31] will retry after 165.562µs: open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/pid: no such file or directory
I1213 13:47:41.830069  405531 retry.go:31] will retry after 277.184µs: open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/pid: no such file or directory
I1213 13:47:41.831236  405531 retry.go:31] will retry after 402.001µs: open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/pid: no such file or directory
I1213 13:47:41.832405  405531 retry.go:31] will retry after 324.172µs: open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/pid: no such file or directory
I1213 13:47:41.833535  405531 retry.go:31] will retry after 520.419µs: open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/pid: no such file or directory
I1213 13:47:41.834666  405531 retry.go:31] will retry after 1.689823ms: open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/pid: no such file or directory
I1213 13:47:41.836851  405531 retry.go:31] will retry after 2.2861ms: open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/pid: no such file or directory
I1213 13:47:41.840064  405531 retry.go:31] will retry after 2.459996ms: open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/pid: no such file or directory
I1213 13:47:41.843227  405531 retry.go:31] will retry after 3.81766ms: open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/pid: no such file or directory
I1213 13:47:41.847444  405531 retry.go:31] will retry after 7.189209ms: open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/pid: no such file or directory
I1213 13:47:41.855663  405531 retry.go:31] will retry after 4.897661ms: open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/pid: no such file or directory
I1213 13:47:41.860900  405531 retry.go:31] will retry after 16.605389ms: open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/pid: no such file or directory
I1213 13:47:41.878147  405531 retry.go:31] will retry after 12.221534ms: open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/pid: no such file or directory
I1213 13:47:41.891398  405531 retry.go:31] will retry after 18.9165ms: open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/pid: no such file or directory
I1213 13:47:41.910656  405531 retry.go:31] will retry after 48.433974ms: open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-307148 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1213 13:47:47.678879  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-307148 -n scheduled-stop-307148
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-307148
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-307148 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 13:48:07.736235  613148 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:48:07.736486  613148 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:48:07.736495  613148 out.go:374] Setting ErrFile to fd 2...
	I1213 13:48:07.736499  613148 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:48:07.736718  613148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:48:07.737013  613148 out.go:368] Setting JSON to false
	I1213 13:48:07.737090  613148 mustload.go:66] Loading cluster: scheduled-stop-307148
	I1213 13:48:07.737404  613148 config.go:182] Loaded profile config "scheduled-stop-307148": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:48:07.737473  613148 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/scheduled-stop-307148/config.json ...
	I1213 13:48:07.737651  613148 mustload.go:66] Loading cluster: scheduled-stop-307148
	I1213 13:48:07.737751  613148 config.go:182] Loaded profile config "scheduled-stop-307148": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
E1213 13:48:19.816667  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-307148
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-307148: exit status 7 (84.093904ms)

                                                
                                                
-- stdout --
	scheduled-stop-307148
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-307148 -n scheduled-stop-307148
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-307148 -n scheduled-stop-307148: exit status 7 (80.891952ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-307148" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-307148
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-307148: (4.572595938s)
--- PASS: TestScheduledStopUnix (98.73s)
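
The scheduled-stop flow above reduces to three operations: schedule a stop, inspect the pending stop via the TimeToStop status field, and optionally cancel it. A minimal sketch with the flags used in this run (profile name and binary path are from this run):

    out/minikube-linux-amd64 stop -p scheduled-stop-307148 --schedule 5m
    out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-307148
    out/minikube-linux-amd64 stop -p scheduled-stop-307148 --cancel-scheduled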

                                                
                                    
TestInsufficientStorage (11.56s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-909898 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-909898 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.070512579s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b1af90db-52b2-455c-aac0-657cff9600fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-909898] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"83cd184f-6e60-49ec-992c-f99b0807208d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22122"}}
	{"specversion":"1.0","id":"6585949b-1e9d-4d11-920d-2f56170c53e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d8bc46f4-572c-4ae6-8923-3bf5e434cf52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig"}}
	{"specversion":"1.0","id":"5137ef7f-82de-480d-8f1a-4e98c2711124","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube"}}
	{"specversion":"1.0","id":"22e9ced4-eebc-49e2-99ee-ee5799b3b371","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"befedbd4-31d5-4ee1-b0de-b459d40cfe78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3a1738ba-ec99-4d34-a096-815e2139a662","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"2a61e680-9ab9-4a0d-a1fa-4473f26fc716","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c058c9c8-d3db-4a2b-b6a7-756a8e8da010","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c9983adc-563d-4649-aa73-a509b8158d6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"1b309ba3-24b0-45a3-ae7b-4abad1227d50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-909898\" primary control-plane node in \"insufficient-storage-909898\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0c31837a-9038-47af-8ea0-2fbc5534890a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"017a5e04-fe32-4286-af53-0b1923683e68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ce464fa4-65e0-432c-ad27-1249b63912a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-909898 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-909898 --output=json --layout=cluster: exit status 7 (288.042405ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-909898","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-909898","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 13:49:06.829168  615434 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-909898" does not appear in /home/jenkins/minikube-integration/22122-401936/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-909898 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-909898 --output=json --layout=cluster: exit status 7 (288.32391ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-909898","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-909898","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 13:49:07.117600  615546 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-909898" does not appear in /home/jenkins/minikube-integration/22122-401936/kubeconfig
	E1213 13:49:07.127862  615546 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/insufficient-storage-909898/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-909898" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-909898
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-909898: (1.910984831s)
--- PASS: TestInsufficientStorage (11.56s)
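
The JSON cluster status is what makes this test assertable: the top-level StatusCode/StatusName fields carry the InsufficientStorage condition even though the command itself exits 7, and the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE values in the event stream suggest the low-disk condition is simulated rather than real. A sketch of pulling that field out with jq (jq is not used by the test, it is only an illustration):

    out/minikube-linux-amd64 status -p insufficient-storage-909898 --output=json --layout=cluster | jq -r '.StatusName'
    # prints "InsufficientStorage" for the payload captured above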

                                                
                                    
TestRunningBinaryUpgrade (299.32s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2543202883 start -p running-upgrade-207105 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2543202883 start -p running-upgrade-207105 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (27.442692348s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-207105 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1213 13:50:20.918538  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-207105 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m26.630922186s)
helpers_test.go:176: Cleaning up "running-upgrade-207105" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-207105
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-207105: (1.970760726s)
--- PASS: TestRunningBinaryUpgrade (299.32s)

                                                
                                    
TestMissingContainerUpgrade (102.73s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1084136550 start -p missing-upgrade-454056 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1084136550 start -p missing-upgrade-454056 --memory=3072 --driver=docker  --container-runtime=containerd: (19.889509057s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-454056
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-454056
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-454056 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1213 13:52:17.853397  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:53:19.816284  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-454056 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m16.608971632s)
helpers_test.go:176: Cleaning up "missing-upgrade-454056" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-454056
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-454056: (2.379294125s)
--- PASS: TestMissingContainerUpgrade (102.73s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.99s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (327.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2585761012 start -p stopped-upgrade-445440 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2585761012 start -p stopped-upgrade-445440 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (53.278212725s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2585761012 -p stopped-upgrade-445440 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2585761012 -p stopped-upgrade-445440 stop: (1.257696664s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-445440 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-445440 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m33.353122189s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (327.89s)
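
The stopped-binary upgrade exercises a three-step pattern: create the cluster with an older release, stop it with that same release, then start it again with the binary under test. A condensed sketch using the binaries and profile from this run (the /tmp binary appears to be a copy of the v1.35.0 release staged during the Setup step):

    /tmp/minikube-v1.35.0.2585761012 start -p stopped-upgrade-445440 --memory=3072 --vm-driver=docker --container-runtime=containerd
    /tmp/minikube-v1.35.0.2585761012 -p stopped-upgrade-445440 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-445440 --memory=3072 --driver=docker --container-runtime=containerd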

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-424192 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-424192 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (106.073324ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-424192] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
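
The rejection above is the usage check for mutually exclusive flags: --no-kubernetes cannot be combined with --kubernetes-version, and the error text suggests clearing any globally configured version first. A minimal sketch of the failing invocation and the suggested follow-up, taken from the output above:

    out/minikube-linux-amd64 start -p NoKubernetes-424192 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
    echo "exit code: $?"       # 14 (MK_USAGE) in this run
    out/minikube-linux-amd64 config unset kubernetes-version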

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (26.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-424192 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-424192 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (25.831462481s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-424192 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.21s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-424192 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-424192 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (3.821619334s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-424192 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-424192 status -o json: exit status 2 (298.762056ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-424192","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-424192
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-424192: (1.949983234s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.07s)

                                                
                                    
TestNoKubernetes/serial/Start (3.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-424192 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-424192 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (3.602469671s)
--- PASS: TestNoKubernetes/serial/Start (3.60s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22122-401936/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-424192 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-424192 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.01329ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
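
For context, `systemctl is-active` exits 0 only when the unit is active, so the "Process exited with status 3" above is systemd reporting an inactive kubelet, which is exactly what this test expects. A minimal manual re-check (a sketch, not the harness code):
# A non-zero exit here means kubelet is not running, as required in --no-kubernetes mode.
if out/minikube-linux-amd64 ssh -p NoKubernetes-424192 "sudo systemctl is-active --quiet service kubelet"; then
  echo "unexpected: kubelet is active"
else
  echo "ok: kubelet is not active"
fi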

                                                
                                    
TestNoKubernetes/serial/ProfileList (48.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (16.423220237s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E1213 13:51:24.613539  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (32.025432654s)
--- PASS: TestNoKubernetes/serial/ProfileList (48.45s)
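
The JSON form of the profile list can be inspected directly; a sketch, assuming jq is available and assuming minikube's usual top-level "valid"/"invalid" grouping (the schema is not shown in the log above):
# Pretty-print the profile list, then list the names of valid profiles (field names assumed).
out/minikube-linux-amd64 profile list --output=json | jq .
out/minikube-linux-amd64 profile list --output=json | jq -r '.valid[].Name'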

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-424192
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-424192: (1.271238301s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-424192 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-424192 --driver=docker  --container-runtime=containerd: (6.657993279s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.66s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-424192 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-424192 "sudo systemctl is-active --quiet service kubelet": exit status 1 (312.337212ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
TestPause/serial/Start (42.57s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-573146 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-573146 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (42.569608105s)
--- PASS: TestPause/serial/Start (42.57s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (5.76s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-573146 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-573146 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.745180512s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.76s)

                                                
                                    
TestPause/serial/Pause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-573146 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.66s)

                                                
                                    
TestPause/serial/VerifyStatus (0.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-573146 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-573146 --output=json --layout=cluster: exit status 2 (342.431027ms)

                                                
                                                
-- stdout --
	{"Name":"pause-573146","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-573146","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
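
A sketch of how the paused state can be read back out of the --layout=cluster JSON above, assuming jq is installed (field names taken from the captured stdout):
# Overall cluster status name ("Paused") and per-component status of the first node.
out/minikube-linux-amd64 status -p pause-573146 --output=json --layout=cluster | jq -r '.StatusName'
out/minikube-linux-amd64 status -p pause-573146 --output=json --layout=cluster | jq -r '.Nodes[0].Components | to_entries[] | "\(.key)=\(.value.StatusName)"'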

                                                
                                    
TestPause/serial/Unpause (0.62s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-573146 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.62s)

                                                
                                    
TestPause/serial/PauseAgain (0.75s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-573146 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.75s)

                                                
                                    
TestPause/serial/DeletePaused (2.73s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-573146 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-573146 --alsologtostderr -v=5: (2.732276582s)
--- PASS: TestPause/serial/DeletePaused (2.73s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (19.2s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (19.136085135s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-573146
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-573146: exit status 1 (21.881717ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-573146: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (19.20s)
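
The cleanup check relies on docker returning a non-zero exit once the profile's resources are gone; a minimal manual version of the same idea (a sketch, not the test code):
# After `minikube delete`, the profile's volume and network should no longer exist.
docker volume inspect pause-573146 >/dev/null 2>&1 && echo "volume still present" || echo "volume removed"
docker network ls --filter name=pause-573146 --format '{{.Name}}'   # expected to print nothing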

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.76s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-445440
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-445440: (2.762251122s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.76s)

                                                
                                    
TestNetworkPlugins/group/false (3.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-603819 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-603819 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (191.908685ms)

                                                
                                                
-- stdout --
	* [false-603819] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:55:13.484721  691179 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:55:13.485097  691179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:55:13.485112  691179 out.go:374] Setting ErrFile to fd 2...
	I1213 13:55:13.485118  691179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:55:13.485467  691179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-401936/.minikube/bin
	I1213 13:55:13.486138  691179 out.go:368] Setting JSON to false
	I1213 13:55:13.487750  691179 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9456,"bootTime":1765624657,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:55:13.487823  691179 start.go:143] virtualization: kvm guest
	I1213 13:55:13.489914  691179 out.go:179] * [false-603819] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:55:13.492560  691179 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:55:13.492552  691179 notify.go:221] Checking for updates...
	I1213 13:55:13.494975  691179 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:55:13.496138  691179 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-401936/kubeconfig
	I1213 13:55:13.497533  691179 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-401936/.minikube
	I1213 13:55:13.499155  691179 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:55:13.500488  691179 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:55:13.502357  691179 config.go:182] Loaded profile config "cert-expiration-913044": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:55:13.502519  691179 config.go:182] Loaded profile config "cert-options-652721": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 13:55:13.502669  691179 config.go:182] Loaded profile config "kubernetes-upgrade-205521": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 13:55:13.502813  691179 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:55:13.529911  691179 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:55:13.530012  691179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:55:13.593430  691179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-13 13:55:13.582288478 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:55:13.593553  691179 docker.go:319] overlay module found
	I1213 13:55:13.598367  691179 out.go:179] * Using the docker driver based on user configuration
	I1213 13:55:13.599755  691179 start.go:309] selected driver: docker
	I1213 13:55:13.599771  691179 start.go:927] validating driver "docker" against <nil>
	I1213 13:55:13.599784  691179 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:55:13.601505  691179 out.go:203] 
	W1213 13:55:13.602933  691179 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1213 13:55:13.604088  691179 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-603819 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-603819

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-603819

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-603819

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-603819

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-603819

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-603819

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-603819

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-603819

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-603819

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-603819

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-603819

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-603819" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-603819" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:55:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-913044
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:55:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8555
  name: cert-options-652721
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:50:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-205521
contexts:
- context:
    cluster: cert-expiration-913044
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:55:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-913044
  name: cert-expiration-913044
- context:
    cluster: cert-options-652721
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:55:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-options-652721
  name: cert-options-652721
- context:
    cluster: kubernetes-upgrade-205521
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:50:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-205521
  name: kubernetes-upgrade-205521
current-context: cert-options-652721
kind: Config
users:
- name: cert-expiration-913044
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/cert-expiration-913044/client.crt
    client-key: /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/cert-expiration-913044/client.key
- name: cert-options-652721
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/cert-options-652721/client.crt
    client-key: /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/cert-options-652721/client.key
- name: kubernetes-upgrade-205521
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521/client.crt
    client-key: /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-603819

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603819"

                                                
                                                
----------------------- debugLogs end: false-603819 [took: 3.535325035s] --------------------------------
helpers_test.go:176: Cleaning up "false-603819" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-603819
--- PASS: TestNetworkPlugins/group/false (3.92s)
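
The exit-14 MK_USAGE failure above is the intended behaviour: with the containerd runtime, minikube rejects --cni=false because that runtime requires a CNI. A hedged sketch of a start line that passes the same validation (bridge is one of minikube's documented --cni values; any real CNI plugin would do):
# Same profile shape as above, but with an actual CNI so the containerd validation succeeds.
out/minikube-linux-amd64 start -p false-603819 --memory=3072 --alsologtostderr --cni=bridge --driver=docker --container-runtime=containerd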

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (48.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-759693 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-759693 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (48.983221296s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (48.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (48.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-173346 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-173346 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (48.283396095s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (48.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-759693 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d503ad70-8bdd-414d-94f2-b92b699f9f37] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [d503ad70-8bdd-414d-94f2-b92b699f9f37] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003911667s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-759693 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.25s)
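
The harness polls the busybox pod by label until it is Running; the same readiness gate can be expressed directly with kubectl (a sketch, not the harness code; label and timeout are taken from the log above):
# Wait for the pod created from testdata/busybox.yaml to become Ready, then rerun the ulimit check.
kubectl --context old-k8s-version-759693 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
kubectl --context old-k8s-version-759693 exec busybox -- /bin/sh -c "ulimit -n"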

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-173346 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [af5ed113-e732-4f50-8701-6a5ad8f11e0f] Pending
helpers_test.go:353: "busybox" [af5ed113-e732-4f50-8701-6a5ad8f11e0f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [af5ed113-e732-4f50-8701-6a5ad8f11e0f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003736905s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-173346 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-759693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-759693 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.89s)
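
The addon is enabled with overridden --images/--registries values; one hedged way to confirm the override landed is to read the image back off the deployment (a standard jsonpath query; the expected value is the fake.domain/echoserver pair passed above):
# Show the image the metrics-server deployment was actually configured with.
kubectl --context old-k8s-version-759693 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'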

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-759693 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-759693 --alsologtostderr -v=3: (12.042344198s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-173346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-173346 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.80s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-173346 --alsologtostderr -v=3
E1213 13:56:24.613646  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-173346 --alsologtostderr -v=3: (12.104310922s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-759693 -n old-k8s-version-759693
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-759693 -n old-k8s-version-759693: exit status 7 (84.992979ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-759693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
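
The "(may be ok)" note reflects that `minikube status` exits non-zero for a stopped cluster while still printing the host state on stdout; a small sketch of the same tolerant check:
# Capture the host state without letting the non-zero exit abort a `set -e` script.
host_state=$(out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-759693 -n old-k8s-version-759693 || true)
echo "host: ${host_state}"   # "Stopped" here, per the stdout captured above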

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (44.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-759693 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-759693 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (44.474326121s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-759693 -n old-k8s-version-759693
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-173346 -n no-preload-173346
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-173346 -n no-preload-173346: exit status 7 (89.926319ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-173346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (50.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-173346 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-173346 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (49.751240789s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-173346 -n no-preload-173346
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-5wv2z" [3b8e6f11-fe31-4555-97f6-c570145087d0] Running
E1213 13:57:17.852685  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/addons-824997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003122368s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-5wv2z" [3b8e6f11-fe31-4555-97f6-c570145087d0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003805476s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-759693 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-759693 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
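
The audit above is driven by `image list --format=json`; the same inventory can be eyeballed without parsing JSON (a sketch; the plain-text form prints one image reference per line, and the grep is only a rough stand-in for the harness's notion of "non-minikube" images):
# Print the images present in the profile, filtering out the registry.k8s.io control-plane images.
out/minikube-linux-amd64 -p old-k8s-version-759693 image list | grep -v '^registry.k8s.io/'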

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-759693 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-759693 -n old-k8s-version-759693
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-759693 -n old-k8s-version-759693: exit status 2 (326.039464ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-759693 -n old-k8s-version-759693
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-759693 -n old-k8s-version-759693: exit status 2 (344.440733ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-759693 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-759693 -n old-k8s-version-759693
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-759693 -n old-k8s-version-759693
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.79s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-7xfh4" [ff8fddab-9724-490f-ad1f-88692f4edd8f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003596151s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (39.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-871380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-871380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (39.586136026s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (39.59s)
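The --embed-certs flag in the start command above tells minikube to inline the CA and client credentials into the kubeconfig entry instead of referencing certificate files under the profile directory. A quick way to confirm that afterwards (a sketch, assuming embed-certs-871380 is the current kubectl context):

  # with --embed-certs, the kubeconfig carries *-data blobs rather than file paths
  kubectl config view --minify --raw | grep -E 'certificate-authority-data|client-certificate-data|client-key-data'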

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-7xfh4" [ff8fddab-9724-490f-ad1f-88692f4edd8f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004087002s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-173346 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-173346 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-173346 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-173346 -n no-preload-173346
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-173346 -n no-preload-173346: exit status 2 (336.457673ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-173346 -n no-preload-173346
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-173346 -n no-preload-173346: exit status 2 (324.858989ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-173346 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-173346 -n no-preload-173346
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-173346 -n no-preload-173346
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.82s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-264183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-264183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (38.090533642s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.09s)
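This profile is started with --apiserver-port=8444 instead of minikube's usual 8443, so the API server URL written to kubeconfig should end in :8444. One way to check (a sketch; the cluster entry is named after the profile):

  # print the recorded API server URL for this cluster
  kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-264183")].cluster.server}'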

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.30s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-871380 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [828ae2f2-911d-4dec-8799-ebade0d71e25] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [828ae2f2-911d-4dec-8799-ebade0d71e25] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.006771581s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-871380 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.30s)
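DeployApp creates a busybox pod from testdata/busybox.yaml, waits for it to leave Pending and run, then execs ulimit -n inside it. Roughly the same check can be done with plain kubectl (a sketch; kubectl wait watches the Ready condition rather than the phase the test helper polls):

  kubectl --context embed-certs-871380 create -f testdata/busybox.yaml
  # wait for the pod labelled integration-test=busybox to become Ready (the test allows 8m)
  kubectl --context embed-certs-871380 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
  # then verify the container's open-file limit
  kubectl --context embed-certs-871380 exec busybox -- /bin/sh -c "ulimit -n"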

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (22.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-542830 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-542830 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (22.419925373s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (22.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-871380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-871380 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)
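EnableAddonWhileActive enables the metrics-server addon with an image and registry override and then describes the resulting Deployment. To see whether the override actually landed, the container image can be printed directly (a sketch; the exact rendered image string is not asserted here):

  # the addon's Deployment lives in kube-system; its image should reflect the
  # fake.domain / echoserver:1.4 override passed above
  kubectl --context embed-certs-871380 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'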

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-871380 --alsologtostderr -v=3
E1213 13:58:19.815883  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-017456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-871380 --alsologtostderr -v=3: (12.127563238s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-264183 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c8fb7d15-ddb5-499d-9b7f-6e4fa065a47d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c8fb7d15-ddb5-499d-9b7f-6e4fa065a47d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004007086s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-264183 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-264183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-264183 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-871380 -n embed-certs-871380
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-871380 -n embed-certs-871380: exit status 7 (91.30208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-871380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)
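EnableAddonAfterStop first confirms the host is down (status exits 7 and prints Stopped, which the test tolerates) and then enables the dashboard addon against the stopped profile; the addon only takes effect once the profile is started again. The same two steps by hand (a sketch, with PROFILE as a placeholder):

  # on a stopped profile, status exits non-zero (7 in this run) and prints Stopped
  out/minikube-linux-amd64 status --format={{.Host}} -p PROFILE -n PROFILE || true
  # addons can still be enabled while stopped; they are applied on the next start
  out/minikube-linux-amd64 addons enable dashboard -p PROFILE --images=MetricsScraper=registry.k8s.io/echoserver:1.4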

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (49.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-871380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-871380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (49.535346999s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-871380 -n embed-certs-871380
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.89s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.10s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-264183 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-264183 --alsologtostderr -v=3: (12.101480009s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.78s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-542830 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.78s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-542830 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-542830 --alsologtostderr -v=3: (1.31743294s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-542830 -n newest-cni-542830
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-542830 -n newest-cni-542830: exit status 7 (90.177732ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-542830 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (11.78s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-542830 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-542830 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (11.389236814s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-542830 -n newest-cni-542830
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.78s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-264183 -n default-k8s-diff-port-264183
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-264183 -n default-k8s-diff-port-264183: exit status 7 (95.690785ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-264183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-264183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-264183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (51.472837482s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-264183 -n default-k8s-diff-port-264183
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.89s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-542830 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)
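VerifyKubernetesImages dumps the images present on the node as JSON and logs anything outside the expected minikube/Kubernetes set; the kindnetd entry above is reported but does not fail the test. The raw listing can be inspected the same way (a sketch; jq is only used here for pretty-printing and is an assumption, not part of the test):

  out/minikube-linux-amd64 -p newest-cni-542830 image list --format=json | jq .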

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-542830 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-542830 -n newest-cni-542830
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-542830 -n newest-cni-542830: exit status 2 (416.697041ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-542830 -n newest-cni-542830
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-542830 -n newest-cni-542830: exit status 2 (352.613688ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-542830 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-542830 -n newest-cni-542830
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-542830 -n newest-cni-542830
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.38s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (42.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-603819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-603819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (42.131909035s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (38.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-603819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-603819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (38.731533034s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (38.73s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-lsmkc" [b11bded4-09f8-43d3-ab12-5594080d191c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003021314s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-lsmkc" [b11bded4-09f8-43d3-ab12-5594080d191c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003803416s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-871380 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-871380 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-871380 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-871380 -n embed-certs-871380
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-871380 -n embed-certs-871380: exit status 2 (353.957368ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-871380 -n embed-certs-871380
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-871380 -n embed-certs-871380: exit status 2 (333.048535ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-871380 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-871380 -n embed-certs-871380
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-871380 -n embed-certs-871380
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ngv9q" [aa23da5e-5b06-471f-8c73-9418070b5b82] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004244903s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-603819 "pgrep -a kubelet"
I1213 13:59:37.879970  405531 config.go:182] Loaded profile config "auto-603819": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (8.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-603819 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-cw7tr" [734dcd7f-5802-46cc-8b89-50d3cf532f76] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-cw7tr" [734dcd7f-5802-46cc-8b89-50d3cf532f76] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004426557s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.20s)
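NetCatPod (re)creates the netcat test Deployment from testdata/netcat-deployment.yaml and waits for its pod (label app=netcat) to come up. A close equivalent with kubectl alone (a sketch, waiting on the Deployment's Available condition instead of polling individual pods):

  kubectl --context auto-603819 replace --force -f testdata/netcat-deployment.yaml
  # block until the Deployment reports Available (the test allows up to 15m)
  kubectl --context auto-603819 wait --for=condition=Available deployment/netcat --timeout=15m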

                                                
                                    
TestNetworkPlugins/group/flannel/Start (52.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-603819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-603819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (52.719690657s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.72s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ngv9q" [aa23da5e-5b06-471f-8c73-9418070b5b82] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003598523s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-264183 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.07s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-603819 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)
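The DNS probe resolves the kubernetes.default Service from inside the netcat pod. When this step fails, a usual follow-up is to look at CoreDNS itself; a sketch of both commands (the k8s-app=kube-dns label is the standard upstream label for CoreDNS, not something this test asserts):

  kubectl --context auto-603819 exec deployment/netcat -- nslookup kubernetes.default
  # if resolution fails, inspect the CoreDNS pods backing cluster DNS
  kubectl --context auto-603819 -n kube-system get pods -l k8s-app=kube-dns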

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-603819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-603819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
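Localhost and HairPin reuse the same pod: the first dials a listener on localhost, the second dials the pod's own Service name (netcat), so the traffic has to hairpin back through the Service to the pod that sent it. Condensed, the two probes are (taken directly from the commands above):

  # localhost reachability from inside the netcat pod
  kubectl --context auto-603819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # hairpin: connect to the pod's own Service and expect to be routed back to itself
  kubectl --context auto-603819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"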

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-bwcgw" [ffad06e4-2bb1-4bcb-b537-e7c6f9925ac1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003823497s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
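ControllerPod waits for the CNI's own pod to become healthy before the network is exercised; for kindnet that is a DaemonSet pod in kube-system labelled app=kindnet, as the wait above shows. The manual equivalent (a sketch):

  kubectl --context kindnet-603819 -n kube-system get pods -l app=kindnet
  # or block until it is Ready, mirroring the test's 10m allowance
  kubectl --context kindnet-603819 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m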

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-264183 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.90s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-264183 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-264183 -n default-k8s-diff-port-264183
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-264183 -n default-k8s-diff-port-264183: exit status 2 (336.043818ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-264183 -n default-k8s-diff-port-264183
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-264183 -n default-k8s-diff-port-264183: exit status 2 (323.535157ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-264183 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-264183 -n default-k8s-diff-port-264183
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-264183 -n default-k8s-diff-port-264183
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.90s)
E1213 14:01:10.465605  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/old-k8s-version-759693/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:11.823468  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/no-preload-173346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:11.829983  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/no-preload-173346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:11.842109  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/no-preload-173346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:11.863559  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/no-preload-173346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:11.905080  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/no-preload-173346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:11.986556  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/no-preload-173346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:12.148573  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/no-preload-173346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:12.470540  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/no-preload-173346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:13.111970  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/no-preload-173346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:14.394174  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/no-preload-173346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:15.587031  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/old-k8s-version-759693/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:16.956248  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/no-preload-173346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-603819 "pgrep -a kubelet"
I1213 13:59:53.508527  405531 config.go:182] Loaded profile config "kindnet-603819": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-603819 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-qft56" [195f62e9-40b2-464e-bc2b-7dac68d70006] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-qft56" [195f62e9-40b2-464e-bc2b-7dac68d70006] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003820234s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (36.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-603819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-603819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (36.774362913s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (36.77s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-603819 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-603819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-603819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (71.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-603819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-603819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m11.21561463s)
--- PASS: TestNetworkPlugins/group/bridge/Start (71.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (53.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-603819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-603819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (53.628631351s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.63s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-603819 "pgrep -a kubelet"
I1213 14:00:31.119210  405531 config.go:182] Loaded profile config "enable-default-cni-603819": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-603819 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-q6kp4" [b5f0e298-ea2e-4383-94c3-71777f7b2616] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-q6kp4" [b5f0e298-ea2e-4383-94c3-71777f7b2616] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.0047022s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-szf2l" [b0a588c5-be40-47be-ae12-fd997dd28ae8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004089087s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-603819 "pgrep -a kubelet"
I1213 14:00:37.980290  405531 config.go:182] Loaded profile config "flannel-603819": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-603819 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-nw72j" [eb16d0dc-a6da-4414-ae34-5dc51e2b7899] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-nw72j" [eb16d0dc-a6da-4414-ae34-5dc51e2b7899] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004374388s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-603819 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-603819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-603819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-603819 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-603819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-603819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (51.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-603819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1213 14:01:05.333662  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/old-k8s-version-759693/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:05.340122  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/old-k8s-version-759693/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:05.351580  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/old-k8s-version-759693/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:05.373485  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/old-k8s-version-759693/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:05.414931  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/old-k8s-version-759693/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:05.496409  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/old-k8s-version-759693/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:05.658474  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/old-k8s-version-759693/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:05.980299  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/old-k8s-version-759693/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:06.621910  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/old-k8s-version-759693/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-603819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (51.822486605s)
--- PASS: TestNetworkPlugins/group/calico/Start (51.82s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-603819 "pgrep -a kubelet"
I1213 14:01:18.906649  405531 config.go:182] Loaded profile config "custom-flannel-603819": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-603819 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-crb9r" [84fd5828-78dc-4514-bb68-6194d1213c1d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-crb9r" [84fd5828-78dc-4514-bb68-6194d1213c1d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.00337524s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-603819 "pgrep -a kubelet"
I1213 14:01:19.502835  405531 config.go:182] Loaded profile config "bridge-603819": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-603819 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-4z7q9" [792bcd26-9bda-44e6-993c-df69f35d8e05] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 14:01:22.078562  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/no-preload-173346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-4z7q9" [792bcd26-9bda-44e6-993c-df69f35d8e05] Running
E1213 14:01:24.614260  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/functional-217219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:01:25.829017  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/old-k8s-version-759693/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003707638s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-603819 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-603819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-603819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-603819 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-603819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-603819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-gt5xt" [818892ac-b2d3-4903-a208-9dd41a86e69a] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1213 14:01:52.802750  405531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/no-preload-173346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "calico-node-gt5xt" [818892ac-b2d3-4903-a208-9dd41a86e69a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004286604s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-603819 "pgrep -a kubelet"
I1213 14:01:58.474920  405531 config.go:182] Loaded profile config "calico-603819": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (8.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-603819 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-l9xh4" [18b7a1eb-670b-406e-ada2-3a503e6a7acd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-l9xh4" [18b7a1eb-670b-406e-ada2-3a503e6a7acd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.003865637s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.17s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-603819 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-603819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-603819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    

Test skip (33/420)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
155 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
156 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
157 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
366 TestStartStop/group/disable-driver-mounts 0.18
393 TestNetworkPlugins/group/kubenet 3.92
403 TestNetworkPlugins/group/cilium 6.08
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-909187" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-909187
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-603819 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-603819

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-603819

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-603819

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-603819

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-603819

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-603819

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-603819

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-603819

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-603819

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-603819

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-603819

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-603819" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-603819" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:55:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-913044
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:50:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-205521
contexts:
- context:
    cluster: cert-expiration-913044
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:55:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-913044
  name: cert-expiration-913044
- context:
    cluster: kubernetes-upgrade-205521
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:50:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-205521
  name: kubernetes-upgrade-205521
current-context: ""
kind: Config
users:
- name: cert-expiration-913044
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/cert-expiration-913044/client.crt
    client-key: /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/cert-expiration-913044/client.key
- name: kubernetes-upgrade-205521
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521/client.crt
    client-key: /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-603819

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603819"

                                                
                                                
----------------------- debugLogs end: kubenet-603819 [took: 3.727089546s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-603819" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-603819
--- SKIP: TestNetworkPlugins/group/kubenet (3.92s)
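
Editor's note: the repeated "context was not found" and "Profile \"kubenet-603819\" not found" messages in the debugLogs above are expected, not additional failures. The test skips before ever running "minikube start -p kubenet-603819", so no profile or kubeconfig context is created, and the kubeconfig dump accordingly lists only cert-expiration-913044 and kubernetes-upgrade-205521. A minimal sketch for confirming this in a local environment (illustrative commands, not part of the recorded test run):

  minikube profile list
  kubectl config get-contexts

Neither listing would show a kubenet-603819 entry.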

                                                
                                    
TestNetworkPlugins/group/cilium (6.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-603819 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-603819

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-603819

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-603819

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-603819

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-603819

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-603819

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-603819

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-603819

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-603819

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-603819

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-603819

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-603819" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-603819

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-603819

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-603819

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-603819

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-603819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-603819" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:55:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-913044
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-401936/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:50:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-205521
contexts:
- context:
    cluster: cert-expiration-913044
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:55:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-913044
  name: cert-expiration-913044
- context:
    cluster: kubernetes-upgrade-205521
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:50:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-205521
  name: kubernetes-upgrade-205521
current-context: ""
kind: Config
users:
- name: cert-expiration-913044
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/cert-expiration-913044/client.crt
    client-key: /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/cert-expiration-913044/client.key
- name: kubernetes-upgrade-205521
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521/client.crt
    client-key: /home/jenkins/minikube-integration/22122-401936/.minikube/profiles/kubernetes-upgrade-205521/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-603819

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-603819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603819"

                                                
                                                
----------------------- debugLogs end: cilium-603819 [took: 5.897517996s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-603819" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-603819
--- SKIP: TestNetworkPlugins/group/cilium (6.08s)