Test Report: KVM_Linux_containerd 22081

502ebf1e50e408071a7e5daf27f82abd53674654:2025-12-09:42698

Test fail (2/437)

Order  Failed test                                                                  Duration (s)
53     TestAddons/parallel/LocalPath                                                344.92
192    TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd   302.14
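
To reproduce a failure outside CI, the failing test can be rerun on its own from a minikube source checkout. The following is a sketch, assuming the integration suite's standard flags and the kvm2/containerd configuration this job uses; the CI harness may pass additional flags (binary path, extra timeouts) not shown here:

    # rerun only the failing addon subtest against a kvm2 + containerd cluster
    go test ./test/integration -v -timeout 90m \
      -run 'TestAddons/parallel/LocalPath' \
      --minikube-start-args='--driver=kvm2 --container-runtime=containerd'
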
TestAddons/parallel/LocalPath (344.92s)
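
This test applies testdata/storage-provisioner-rancher/pvc.yaml and pod.yaml, then polls PVC "test-pvc" in the default namespace for up to 5m0s, expecting it to reach the Bound phase; in this run the phase never changed before the deadline. A minimal manual check against the same cluster might look like the commands below (a sketch; the local-path-storage namespace and the local-path-provisioner deployment name are assumptions based on upstream local-path-provisioner defaults, not taken from this log):

    # the phase should eventually report "Bound"
    kubectl --context addons-520986 get pvc test-pvc -n default -o jsonpath='{.status.phase}'
    # PVC events usually say why provisioning is stuck
    kubectl --context addons-520986 describe pvc test-pvc -n default
    # logs from the storage-provisioner-rancher addon's provisioner
    kubectl --context addons-520986 -n local-path-storage logs deploy/local-path-provisioner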

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:1009: (dbg) Run:  kubectl --context addons-520986 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:1015: (dbg) Run:  kubectl --context addons-520986 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:1019: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc test-pvc -o jsonpath={.status.phase} -n default
[... the same "kubectl get pvc test-pvc" status check repeated until the 5m0s wait expired ...]
helpers_test.go:402: (dbg) Non-zero exit: kubectl --context addons-520986 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (4.676µs)
helpers_test.go:404: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:1020: failed waiting for PVC test-pvc: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-520986 -n addons-520986
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-520986 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-520986 logs -n 25: (1.111374618s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                     ARGS                                                                                                                                                                                                                                     │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-000021                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-only-000021 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ start   │ --download-only -p binary-mirror-212052 --alsologtostderr --binary-mirror http://127.0.0.1:45175 --driver=kvm2  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                               │ binary-mirror-212052 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ delete  │ -p binary-mirror-212052                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ binary-mirror-212052 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ addons  │ enable dashboard -p addons-520986                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-520986                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ start   │ -p addons-520986 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:57 UTC │
	│ addons  │ addons-520986 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
	│ addons  │ addons-520986 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
	│ addons  │ enable headlamp -p addons-520986 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
	│ addons  │ addons-520986 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ addons  │ addons-520986 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ addons  │ addons-520986 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ ip      │ addons-520986 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ addons  │ addons-520986 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ addons  │ addons-520986 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ addons  │ addons-520986 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ ssh     │ addons-520986 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ ip      │ addons-520986 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ addons  │ addons-520986 addons disable ingress-dns --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ addons  │ addons-520986 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ addons  │ addons-520986 addons disable ingress --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-520986                                                                                                                                                                                                                                                                                                                                                                                               │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ addons  │ addons-520986 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ addons  │ addons-520986 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ addons  │ addons-520986 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-520986        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 01:55:46
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 01:55:46.610879  790270 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:55:46.611046  790270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:46.611058  790270 out.go:374] Setting ErrFile to fd 2...
	I1209 01:55:46.611066  790270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:46.611351  790270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
	I1209 01:55:46.611957  790270 out.go:368] Setting JSON to false
	I1209 01:55:46.613003  790270 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":27497,"bootTime":1765217850,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 01:55:46.613059  790270 start.go:143] virtualization: kvm guest
	I1209 01:55:46.614992  790270 out.go:179] * [addons-520986] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 01:55:46.616309  790270 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 01:55:46.616318  790270 notify.go:221] Checking for updates...
	I1209 01:55:46.617693  790270 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 01:55:46.619025  790270 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
	I1209 01:55:46.620313  790270 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
	I1209 01:55:46.621477  790270 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 01:55:46.622714  790270 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 01:55:46.624056  790270 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 01:55:46.654512  790270 out.go:179] * Using the kvm2 driver based on user configuration
	I1209 01:55:46.655808  790270 start.go:309] selected driver: kvm2
	I1209 01:55:46.655826  790270 start.go:927] validating driver "kvm2" against <nil>
	I1209 01:55:46.655844  790270 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 01:55:46.656615  790270 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 01:55:46.656852  790270 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 01:55:46.656881  790270 cni.go:84] Creating CNI manager for ""
	I1209 01:55:46.656923  790270 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1209 01:55:46.656933  790270 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 01:55:46.656967  790270 start.go:353] cluster config:
	{Name:addons-520986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-520986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
ontainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 01:55:46.657069  790270 iso.go:125] acquiring lock: {Name:mk29a40ab0d6eac4567e308b5229766210ecee59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 01:55:46.658537  790270 out.go:179] * Starting "addons-520986" primary control-plane node in "addons-520986" cluster
	I1209 01:55:46.659719  790270 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1209 01:55:46.659758  790270 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-785489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	I1209 01:55:46.659771  790270 cache.go:65] Caching tarball of preloaded images
	I1209 01:55:46.659886  790270 preload.go:238] Found /home/jenkins/minikube-integration/22081-785489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1209 01:55:46.659902  790270 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1209 01:55:46.660286  790270 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/config.json ...
	I1209 01:55:46.660316  790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/config.json: {Name:mk463a364962037a7aec4eadbec0594317e59ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:55:46.660485  790270 start.go:360] acquireMachinesLock for addons-520986: {Name:mk20d7a910149185835b082cbce91d316616a54e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 01:55:46.664826  790270 start.go:364] duration metric: took 4.320734ms to acquireMachinesLock for "addons-520986"
	I1209 01:55:46.664861  790270 start.go:93] Provisioning new machine with config: &{Name:addons-520986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:addons-520986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 01:55:46.664932  790270 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 01:55:46.666460  790270 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1209 01:55:46.666658  790270 start.go:159] libmachine.API.Create for "addons-520986" (driver="kvm2")
	I1209 01:55:46.666687  790270 client.go:173] LocalClient.Create starting
	I1209 01:55:46.666782  790270 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22081-785489/.minikube/certs/ca.pem
	I1209 01:55:46.869161  790270 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22081-785489/.minikube/certs/cert.pem
	I1209 01:55:46.913708  790270 main.go:143] libmachine: creating domain...
	I1209 01:55:46.913734  790270 main.go:143] libmachine: creating network...
	I1209 01:55:46.915344  790270 main.go:143] libmachine: found existing default network
	I1209 01:55:46.915598  790270 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1209 01:55:46.916312  790270 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fea830}
	I1209 01:55:46.916480  790270 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-520986</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1209 01:55:46.922242  790270 main.go:143] libmachine: creating private network mk-addons-520986 192.168.39.0/24...
	I1209 01:55:46.994407  790270 main.go:143] libmachine: private network mk-addons-520986 192.168.39.0/24 created
	I1209 01:55:46.994727  790270 main.go:143] libmachine: <network>
	  <name>mk-addons-520986</name>
	  <uuid>66b68d57-147c-423c-94b4-2860291daa67</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:0f:22:b7'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
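
For reference, the private network created above can be inspected directly on the libvirt host with standard virsh tooling; these commands are an illustrative sketch and were not part of the test run:

	  virsh net-list --all                 # mk-addons-520986 should be listed as active
	  virsh net-dumpxml mk-addons-520986   # prints XML equivalent to the dump logged above
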
	
	I1209 01:55:46.994763  790270 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986 ...
	I1209 01:55:46.994788  790270 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22081-785489/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1209 01:55:46.994800  790270 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22081-785489/.minikube
	I1209 01:55:46.994902  790270 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22081-785489/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22081-785489/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso...
	I1209 01:55:47.259552  790270 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa...
	I1209 01:55:47.451640  790270 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/addons-520986.rawdisk...
	I1209 01:55:47.451696  790270 main.go:143] libmachine: Writing magic tar header
	I1209 01:55:47.451747  790270 main.go:143] libmachine: Writing SSH key tar header
	I1209 01:55:47.451867  790270 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986 ...
	I1209 01:55:47.451944  790270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986
	I1209 01:55:47.451982  790270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986 (perms=drwx------)
	I1209 01:55:47.452000  790270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22081-785489/.minikube/machines
	I1209 01:55:47.452017  790270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22081-785489/.minikube/machines (perms=drwxr-xr-x)
	I1209 01:55:47.452034  790270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22081-785489/.minikube
	I1209 01:55:47.452047  790270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22081-785489/.minikube (perms=drwxr-xr-x)
	I1209 01:55:47.452065  790270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22081-785489
	I1209 01:55:47.452078  790270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22081-785489 (perms=drwxrwxr-x)
	I1209 01:55:47.452094  790270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1209 01:55:47.452115  790270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 01:55:47.452146  790270 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1209 01:55:47.452159  790270 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 01:55:47.452171  790270 main.go:143] libmachine: checking permissions on dir: /home
	I1209 01:55:47.452185  790270 main.go:143] libmachine: skipping /home - not owner
	I1209 01:55:47.452191  790270 main.go:143] libmachine: defining domain...
	I1209 01:55:47.453672  790270 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-520986</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/addons-520986.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-520986'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1209 01:55:47.458941  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:c7:27:4e in network default
	I1209 01:55:47.459508  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:55:47.459524  790270 main.go:143] libmachine: starting domain...
	I1209 01:55:47.459528  790270 main.go:143] libmachine: ensuring networks are active...
	I1209 01:55:47.460454  790270 main.go:143] libmachine: Ensuring network default is active
	I1209 01:55:47.460840  790270 main.go:143] libmachine: Ensuring network mk-addons-520986 is active
	I1209 01:55:47.461495  790270 main.go:143] libmachine: getting domain XML...
	I1209 01:55:47.462574  790270 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-520986</name>
	  <uuid>c1934cbe-8219-4512-9b02-72a0810d6e14</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/addons-520986.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:09:0b:7a'/>
	      <source network='mk-addons-520986'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:c7:27:4e'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
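
Once the domain is started, its state and NIC assignments can be checked from the host; a minimal sketch using standard virsh commands (not part of this run):

	  virsh dominfo addons-520986              # State should read "running"
	  virsh domiflist addons-520986            # two virtio NICs, on networks mk-addons-520986 and default
	  virsh net-dhcp-leases mk-addons-520986   # a lease for MAC 52:54:00:09:0b:7a appears once the guest obtains an IP
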
	
	I1209 01:55:48.728187  790270 main.go:143] libmachine: waiting for domain to start...
	I1209 01:55:48.729684  790270 main.go:143] libmachine: domain is now running
	I1209 01:55:48.729707  790270 main.go:143] libmachine: waiting for IP...
	I1209 01:55:48.730521  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:55:48.731245  790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
	I1209 01:55:48.731264  790270 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:48.731525  790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
	I1209 01:55:48.731612  790270 retry.go:31] will retry after 301.745583ms: waiting for domain to come up
	I1209 01:55:49.035367  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:55:49.036121  790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
	I1209 01:55:49.036153  790270 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:49.036650  790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
	I1209 01:55:49.036706  790270 retry.go:31] will retry after 286.232228ms: waiting for domain to come up
	I1209 01:55:49.324213  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:55:49.325170  790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
	I1209 01:55:49.325187  790270 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:49.325568  790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
	I1209 01:55:49.325638  790270 retry.go:31] will retry after 330.013419ms: waiting for domain to come up
	I1209 01:55:49.657466  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:55:49.658530  790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
	I1209 01:55:49.658552  790270 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:49.658904  790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
	I1209 01:55:49.658943  790270 retry.go:31] will retry after 428.77108ms: waiting for domain to come up
	I1209 01:55:50.089689  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:55:50.090440  790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
	I1209 01:55:50.090456  790270 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:50.090834  790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
	I1209 01:55:50.090878  790270 retry.go:31] will retry after 657.210018ms: waiting for domain to come up
	I1209 01:55:50.749853  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:55:50.750838  790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
	I1209 01:55:50.750860  790270 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:50.751269  790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
	I1209 01:55:50.751316  790270 retry.go:31] will retry after 833.998265ms: waiting for domain to come up
	I1209 01:55:51.587393  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:55:51.588051  790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
	I1209 01:55:51.588067  790270 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:51.588389  790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
	I1209 01:55:51.588423  790270 retry.go:31] will retry after 1.135020025s: waiting for domain to come up
	I1209 01:55:52.724811  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:55:52.725426  790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
	I1209 01:55:52.725446  790270 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:52.725924  790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
	I1209 01:55:52.725975  790270 retry.go:31] will retry after 1.455514481s: waiting for domain to come up
	I1209 01:55:54.183732  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:55:54.184417  790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
	I1209 01:55:54.184438  790270 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:54.184796  790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
	I1209 01:55:54.184837  790270 retry.go:31] will retry after 1.286485281s: waiting for domain to come up
	I1209 01:55:55.473478  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:55:55.474294  790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
	I1209 01:55:55.474316  790270 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:55.474698  790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
	I1209 01:55:55.474747  790270 retry.go:31] will retry after 1.434846567s: waiting for domain to come up
	I1209 01:55:56.911490  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:55:56.912405  790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
	I1209 01:55:56.912431  790270 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:56.912815  790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
	I1209 01:55:56.912873  790270 retry.go:31] will retry after 2.620673714s: waiting for domain to come up
	I1209 01:55:59.536454  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:55:59.537336  790270 main.go:143] libmachine: no network interface addresses found for domain addons-520986 (source=lease)
	I1209 01:55:59.537353  790270 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:59.537815  790270 main.go:143] libmachine: unable to find current IP address of domain addons-520986 in network mk-addons-520986 (interfaces detected: [])
	I1209 01:55:59.537855  790270 retry.go:31] will retry after 3.559268644s: waiting for domain to come up
	I1209 01:56:03.099218  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:03.099958  790270 main.go:143] libmachine: domain addons-520986 has current primary IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:03.099982  790270 main.go:143] libmachine: found domain IP: 192.168.39.56
	I1209 01:56:03.099991  790270 main.go:143] libmachine: reserving static IP address...
	I1209 01:56:03.100497  790270 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-520986", mac: "52:54:00:09:0b:7a", ip: "192.168.39.56"} in network mk-addons-520986
	I1209 01:56:03.297793  790270 main.go:143] libmachine: reserved static IP address 192.168.39.56 for domain addons-520986
	I1209 01:56:03.297824  790270 main.go:143] libmachine: waiting for SSH...
	I1209 01:56:03.297834  790270 main.go:143] libmachine: Getting to WaitForSSH function...
	I1209 01:56:03.301739  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:03.302287  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:minikube Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:03.302317  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:03.302500  790270 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:03.302731  790270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1209 01:56:03.302749  790270 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1209 01:56:03.414057  790270 main.go:143] libmachine: SSH cmd err, output: <nil>: 
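
The "exit 0" probe above is minikube's SSH reachability check. An equivalent manual check, using the key path, user and IP reported later in this log, would look roughly like this (the StrictHostKeyChecking option is added here only to skip the interactive host-key prompt; it is an assumption, not something the test ran):

	  ssh -i /home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa \
	      -o StrictHostKeyChecking=no docker@192.168.39.56 'exit 0' && echo "SSH reachable"
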
	I1209 01:56:03.414513  790270 main.go:143] libmachine: domain creation complete
	I1209 01:56:03.416083  790270 machine.go:94] provisionDockerMachine start ...
	I1209 01:56:03.418831  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:03.419268  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:03.419289  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:03.419436  790270 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:03.419692  790270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1209 01:56:03.419708  790270 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 01:56:03.530029  790270 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 01:56:03.530076  790270 buildroot.go:166] provisioning hostname "addons-520986"
	I1209 01:56:03.533410  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:03.533919  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:03.533952  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:03.534141  790270 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:03.534381  790270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1209 01:56:03.534397  790270 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-520986 && echo "addons-520986" | sudo tee /etc/hostname
	I1209 01:56:03.664001  790270 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-520986
	
	I1209 01:56:03.667045  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:03.667510  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:03.667537  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:03.667731  790270 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:03.667965  790270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1209 01:56:03.667982  790270 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-520986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-520986/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-520986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 01:56:03.792735  790270 main.go:143] libmachine: SSH cmd err, output: <nil>: 
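
The shell fragment above makes the new hostname resolve locally by pointing 127.0.1.1 at addons-520986 in /etc/hosts. A quick verification inside the guest (illustrative):

	  hostname                       # addons-520986
	  grep '^127.0.1.1' /etc/hosts   # 127.0.1.1 addons-520986
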
	I1209 01:56:03.792766  790270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22081-785489/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-785489/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-785489/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-785489/.minikube}
	I1209 01:56:03.792835  790270 buildroot.go:174] setting up certificates
	I1209 01:56:03.792853  790270 provision.go:84] configureAuth start
	I1209 01:56:03.796087  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:03.796672  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:03.796703  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:03.799506  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:03.799913  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:03.799940  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:03.800113  790270 provision.go:143] copyHostCerts
	I1209 01:56:03.800218  790270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-785489/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-785489/.minikube/ca.pem (1078 bytes)
	I1209 01:56:03.800389  790270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-785489/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-785489/.minikube/cert.pem (1123 bytes)
	I1209 01:56:03.800475  790270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-785489/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-785489/.minikube/key.pem (1675 bytes)
	I1209 01:56:03.800540  790270 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-785489/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-785489/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-785489/.minikube/certs/ca-key.pem org=jenkins.addons-520986 san=[127.0.0.1 192.168.39.56 addons-520986 localhost minikube]
	I1209 01:56:03.832656  790270 provision.go:177] copyRemoteCerts
	I1209 01:56:03.832718  790270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 01:56:03.835172  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:03.835522  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:03.835545  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:03.835701  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:03.922765  790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 01:56:03.951020  790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 01:56:03.978975  790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
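
copyRemoteCerts places the CA and the freshly generated server certificate under /etc/docker on the guest, matching the remote paths in the auth options above. An illustrative listing inside the guest:

	  ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
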
	I1209 01:56:04.007889  790270 provision.go:87] duration metric: took 215.015763ms to configureAuth
	I1209 01:56:04.007920  790270 buildroot.go:189] setting minikube options for container-runtime
	I1209 01:56:04.008108  790270 config.go:182] Loaded profile config "addons-520986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 01:56:04.008119  790270 machine.go:97] duration metric: took 592.016858ms to provisionDockerMachine
	I1209 01:56:04.008126  790270 client.go:176] duration metric: took 17.341434216s to LocalClient.Create
	I1209 01:56:04.008166  790270 start.go:167] duration metric: took 17.341508459s to libmachine.API.Create "addons-520986"
	I1209 01:56:04.008179  790270 start.go:293] postStartSetup for "addons-520986" (driver="kvm2")
	I1209 01:56:04.008189  790270 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 01:56:04.008247  790270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 01:56:04.010896  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:04.011299  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:04.011331  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:04.011518  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:04.098859  790270 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 01:56:04.104319  790270 info.go:137] Remote host: Buildroot 2025.02
	I1209 01:56:04.104363  790270 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-785489/.minikube/addons for local assets ...
	I1209 01:56:04.104433  790270 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-785489/.minikube/files for local assets ...
	I1209 01:56:04.104458  790270 start.go:296] duration metric: took 96.272363ms for postStartSetup
	I1209 01:56:04.107612  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:04.108072  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:04.108096  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:04.108333  790270 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/config.json ...
	I1209 01:56:04.108558  790270 start.go:128] duration metric: took 17.443612929s to createHost
	I1209 01:56:04.110845  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:04.111207  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:04.111241  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:04.111428  790270 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:04.111680  790270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1209 01:56:04.111691  790270 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1209 01:56:04.224888  790270 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765245364.183702467
	
	I1209 01:56:04.224916  790270 fix.go:216] guest clock: 1765245364.183702467
	I1209 01:56:04.224929  790270 fix.go:229] Guest: 2025-12-09 01:56:04.183702467 +0000 UTC Remote: 2025-12-09 01:56:04.108573478 +0000 UTC m=+17.546947163 (delta=75.128989ms)
	I1209 01:56:04.224946  790270 fix.go:200] guest clock delta is within tolerance: 75.128989ms
	I1209 01:56:04.224952  790270 start.go:83] releasing machines lock for "addons-520986", held for 17.560105488s
	I1209 01:56:04.228231  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:04.228724  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:04.228763  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:04.229380  790270 ssh_runner.go:195] Run: cat /version.json
	I1209 01:56:04.229510  790270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 01:56:04.232240  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:04.232458  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:04.232734  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:04.232774  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:04.232997  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:04.233022  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:04.233042  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:04.233264  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:04.341562  790270 ssh_runner.go:195] Run: systemctl --version
	I1209 01:56:04.347979  790270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 01:56:04.354408  790270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 01:56:04.354481  790270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 01:56:04.374477  790270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 01:56:04.374501  790270 start.go:496] detecting cgroup driver to use...
	I1209 01:56:04.374581  790270 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 01:56:04.406466  790270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 01:56:04.422568  790270 docker.go:218] disabling cri-docker service (if available) ...
	I1209 01:56:04.422630  790270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 01:56:04.440028  790270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 01:56:04.456080  790270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 01:56:04.600497  790270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 01:56:04.812027  790270 docker.go:234] disabling docker service ...
	I1209 01:56:04.812096  790270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 01:56:04.829739  790270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 01:56:04.846377  790270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 01:56:05.009047  790270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 01:56:05.158428  790270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
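
With cri-docker and docker masked, containerd is left as the only CRI runtime in the guest. A way to confirm the switch (illustrative, not part of the run):

	  systemctl is-enabled docker.service cri-docker.service   # both print "masked"
	  systemctl is-active containerd                            # active (the service is restarted a few lines below)
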
	I1209 01:56:05.174807  790270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 01:56:05.198468  790270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 01:56:05.212840  790270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 01:56:05.227227  790270 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 01:56:05.227304  790270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 01:56:05.240326  790270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 01:56:05.252984  790270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 01:56:05.266394  790270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 01:56:05.279077  790270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 01:56:05.293715  790270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 01:56:05.307344  790270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 01:56:05.319706  790270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
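
Taken together, the commands above write the crictl client config and rewrite /etc/containerd/config.toml to use the cgroupfs driver, the pause:3.10.1 sandbox image, the /etc/cni/net.d conf dir and unprivileged ports. Their net effect can be spot-checked like this (illustrative; the exact section layout depends on the config file shipped in the ISO):

	  cat /etc/crictl.yaml
	  # runtime-endpoint: unix:///run/containerd/containerd.sock

	  grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	  # expected, based on the edits above:
	  #   SystemdCgroup = false
	  #   sandbox_image = "registry.k8s.io/pause:3.10.1"
	  #   conf_dir = "/etc/cni/net.d"
	  #   enable_unprivileged_ports = true
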
	I1209 01:56:05.334271  790270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 01:56:05.348371  790270 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 01:56:05.348440  790270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 01:56:05.371313  790270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
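
The netfilter steps above load br_netfilter (the sysctl probe fails until the module exists) and enable IPv4 forwarding, both prerequisites for the bridge CNI chosen earlier. Illustrative verification inside the guest:

	  lsmod | grep br_netfilter                   # module loaded
	  sysctl net.bridge.bridge-nf-call-iptables   # key exists once the module is loaded (typically 1 by default)
	  cat /proc/sys/net/ipv4/ip_forward           # 1
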
	I1209 01:56:05.383686  790270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 01:56:05.522835  790270 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 01:56:05.564735  790270 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 01:56:05.564834  790270 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 01:56:05.570831  790270 retry.go:31] will retry after 971.714769ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1209 01:56:06.543044  790270 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 01:56:06.549461  790270 start.go:564] Will wait 60s for crictl version
	I1209 01:56:06.549552  790270 ssh_runner.go:195] Run: which crictl
	I1209 01:56:06.554041  790270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 01:56:06.587556  790270 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.4
	RuntimeApiVersion:  v1
	I1209 01:56:06.587649  790270 ssh_runner.go:195] Run: containerd --version
	I1209 01:56:06.609435  790270 ssh_runner.go:195] Run: containerd --version
	I1209 01:56:06.632744  790270 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.1.4 ...
	I1209 01:56:06.637392  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:06.637810  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:06.637834  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:06.638074  790270 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 01:56:06.643262  790270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
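
The bash one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the gateway of the private network. A quick check (illustrative):

	  grep 'host.minikube.internal' /etc/hosts   # 192.168.39.1   host.minikube.internal
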
	I1209 01:56:06.659403  790270 kubeadm.go:884] updating cluster {Name:addons-520986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.
2 ClusterName:addons-520986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binary
Mirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 01:56:06.659576  790270 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1209 01:56:06.659655  790270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 01:56:06.690826  790270 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1209 01:56:06.690913  790270 ssh_runner.go:195] Run: which lz4
	I1209 01:56:06.695296  790270 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 01:56:06.700114  790270 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 01:56:06.700161  790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (339763354 bytes)
	I1209 01:56:07.985220  790270 containerd.go:563] duration metric: took 1.289978143s to copy over tarball
	I1209 01:56:07.985302  790270 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 01:56:09.445921  790270 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.460587753s)
	I1209 01:56:09.445956  790270 containerd.go:570] duration metric: took 1.460704454s to extract the tarball
	I1209 01:56:09.445966  790270 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 01:56:09.487466  790270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 01:56:09.648199  790270 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 01:56:09.701644  790270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 01:56:09.730620  790270 retry.go:31] will retry after 126.27895ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:56:09Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1209 01:56:09.858060  790270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 01:56:09.886334  790270 retry.go:31] will retry after 424.832912ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:56:09Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1209 01:56:10.312118  790270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 01:56:10.339693  790270 retry.go:31] will retry after 484.563011ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:56:10Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1209 01:56:10.824419  790270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 01:56:10.850102  790270 retry.go:31] will retry after 589.37792ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:56:10Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1209 01:56:11.439968  790270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 01:56:11.468075  790270 retry.go:31] will retry after 813.68456ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:56:11Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1209 01:56:12.282326  790270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 01:56:12.317343  790270 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 01:56:12.317376  790270 cache_images.go:86] Images are preloaded, skipping loading
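The retries above show minikube polling the CRI image service with an increasing backoff until containerd's socket reappears after the restart at 01:56:09. As a minimal manual check (not part of the captured output, assuming shell access to the minikube VM), the same verification can be run by hand:

  sudo systemctl is-active containerd
  sudo crictl images --output json

The second command is exactly the one retried in the log; the endpoint it validates is unix:///run/containerd/containerd.sock, as shown in the error messages above.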
	I1209 01:56:12.317393  790270 kubeadm.go:935] updating node { 192.168.39.56 8443 v1.34.2 containerd true true} ...
	I1209 01:56:12.317526  790270 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-520986 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-520986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 01:56:12.317595  790270 ssh_runner.go:195] Run: sudo crictl info
	I1209 01:56:12.349483  790270 cni.go:84] Creating CNI manager for ""
	I1209 01:56:12.349509  790270 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1209 01:56:12.349528  790270 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 01:56:12.349554  790270 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.56 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-520986 NodeName:addons-520986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 01:56:12.349687  790270 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-520986"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.56"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.56"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
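The generated kubeadm configuration ends here and is copied to the node as /var/tmp/minikube/kubeadm.yaml a few lines below. As a hedged sketch (assuming the kubeadm binary path used later in this log and a kubeadm release that supports these subcommands), the config could be checked by hand before init with:

  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run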
	
	I1209 01:56:12.349783  790270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 01:56:12.362577  790270 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 01:56:12.362652  790270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 01:56:12.374618  790270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1209 01:56:12.396009  790270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 01:56:12.416092  790270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1209 01:56:12.436476  790270 ssh_runner.go:195] Run: grep 192.168.39.56	control-plane.minikube.internal$ /etc/hosts
	I1209 01:56:12.441066  790270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 01:56:12.455877  790270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 01:56:12.591465  790270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 01:56:12.611175  790270 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986 for IP: 192.168.39.56
	I1209 01:56:12.611201  790270 certs.go:195] generating shared ca certs ...
	I1209 01:56:12.611225  790270 certs.go:227] acquiring lock for ca certs: {Name:mk11c7b39a751cc374cf1934fc2b19c48b37e451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:12.612106  790270 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-785489/.minikube/ca.key
	I1209 01:56:12.717616  790270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-785489/.minikube/ca.crt ...
	I1209 01:56:12.717656  790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/ca.crt: {Name:mk3c1e8d6ffe211e2671c48707faf8e00f4bdfdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:12.718573  790270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-785489/.minikube/ca.key ...
	I1209 01:56:12.718611  790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/ca.key: {Name:mk3bfe1a0273ff33aafb468c655fc6f6c7cb7e40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:12.719266  790270 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-785489/.minikube/proxy-client-ca.key
	I1209 01:56:12.773171  790270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-785489/.minikube/proxy-client-ca.crt ...
	I1209 01:56:12.773206  790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/proxy-client-ca.crt: {Name:mkd07f91f87087eb7f45edc8239dd6bb28ef0ebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:12.774208  790270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-785489/.minikube/proxy-client-ca.key ...
	I1209 01:56:12.774240  790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/proxy-client-ca.key: {Name:mk45152197778e8fc7475822cf1b22c0f6930e5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:12.780328  790270 certs.go:257] generating profile certs ...
	I1209 01:56:12.780433  790270 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.key
	I1209 01:56:12.780456  790270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt with IP's: []
	I1209 01:56:12.885339  790270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt ...
	I1209 01:56:12.885375  790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: {Name:mkda60a8b805fd51a3e5a7f872d0d54e37aec82e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:12.886444  790270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.key ...
	I1209 01:56:12.886469  790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.key: {Name:mk766b64f28c7278b4ccbf6b51f6b04776f69cb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:12.886596  790270 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.key.a27b7307
	I1209 01:56:12.886621  790270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.crt.a27b7307 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.56]
	I1209 01:56:13.101574  790270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.crt.a27b7307 ...
	I1209 01:56:13.101611  790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.crt.a27b7307: {Name:mk3b9116a9247d2a26910be7cd2e57868f7f8ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:13.101791  790270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.key.a27b7307 ...
	I1209 01:56:13.101804  790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.key.a27b7307: {Name:mkf3e2fd911864f34dd44598c3b837bc37d8a606 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:13.101879  790270 certs.go:382] copying /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.crt.a27b7307 -> /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.crt
	I1209 01:56:13.101974  790270 certs.go:386] copying /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.key.a27b7307 -> /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.key
	I1209 01:56:13.102021  790270 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/proxy-client.key
	I1209 01:56:13.102040  790270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/proxy-client.crt with IP's: []
	I1209 01:56:13.162823  790270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/proxy-client.crt ...
	I1209 01:56:13.162853  790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/proxy-client.crt: {Name:mk527dd9917c037e6b7b6e09620ff9010fdb7478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:13.163838  790270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/proxy-client.key ...
	I1209 01:56:13.163870  790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/proxy-client.key: {Name:mk012b14cbed5aea3498605aac03a6c7a0c5f8b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:13.164064  790270 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-785489/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 01:56:13.164114  790270 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-785489/.minikube/certs/ca.pem (1078 bytes)
	I1209 01:56:13.164156  790270 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-785489/.minikube/certs/cert.pem (1123 bytes)
	I1209 01:56:13.164181  790270 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-785489/.minikube/certs/key.pem (1675 bytes)
	I1209 01:56:13.164765  790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 01:56:13.196720  790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 01:56:13.225933  790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 01:56:13.255269  790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 01:56:13.287240  790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 01:56:13.318402  790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 01:56:13.348599  790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 01:56:13.377320  790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 01:56:13.405727  790270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-785489/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 01:56:13.434160  790270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 01:56:13.454184  790270 ssh_runner.go:195] Run: openssl version
	I1209 01:56:13.460454  790270 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 01:56:13.471760  790270 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 01:56:13.483044  790270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 01:56:13.488474  790270 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 01:56:13.488547  790270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 01:56:13.495758  790270 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 01:56:13.507195  790270 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
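The b5213941.0 link created above is named after the OpenSSL subject hash of the minikube CA, i.e. the value printed by the x509 -hash call two lines earlier. Reproducing it by hand uses the same commands shown in the log:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0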
	I1209 01:56:13.518417  790270 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 01:56:13.522993  790270 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 01:56:13.523055  790270 kubeadm.go:401] StartCluster: {Name:addons-520986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-520986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 01:56:13.523167  790270 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 01:56:13.523224  790270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 01:56:13.556083  790270 cri.go:89] found id: ""
	I1209 01:56:13.556192  790270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 01:56:13.568252  790270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 01:56:13.579746  790270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 01:56:13.591195  790270 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 01:56:13.591212  790270 kubeadm.go:158] found existing configuration files:
	
	I1209 01:56:13.591253  790270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 01:56:13.601777  790270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 01:56:13.601832  790270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 01:56:13.613104  790270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 01:56:13.624152  790270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 01:56:13.624225  790270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 01:56:13.636039  790270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 01:56:13.646291  790270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 01:56:13.646351  790270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 01:56:13.657345  790270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 01:56:13.667792  790270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 01:56:13.667840  790270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 01:56:13.679049  790270 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 01:56:13.728881  790270 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 01:56:13.728964  790270 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 01:56:13.828471  790270 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 01:56:13.828586  790270 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 01:56:13.828682  790270 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 01:56:13.837302  790270 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 01:56:13.840407  790270 out.go:252]   - Generating certificates and keys ...
	I1209 01:56:13.840487  790270 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 01:56:13.840555  790270 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 01:56:14.371848  790270 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 01:56:14.678008  790270 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 01:56:14.944951  790270 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 01:56:15.341403  790270 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 01:56:15.534966  790270 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 01:56:15.535085  790270 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-520986 localhost] and IPs [192.168.39.56 127.0.0.1 ::1]
	I1209 01:56:15.589613  790270 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 01:56:15.589781  790270 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-520986 localhost] and IPs [192.168.39.56 127.0.0.1 ::1]
	I1209 01:56:15.786020  790270 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 01:56:15.924653  790270 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 01:56:16.047225  790270 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 01:56:16.047416  790270 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 01:56:16.456283  790270 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 01:56:16.646808  790270 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 01:56:16.774703  790270 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 01:56:17.611265  790270 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 01:56:17.816990  790270 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 01:56:17.817398  790270 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 01:56:17.819583  790270 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 01:56:17.821479  790270 out.go:252]   - Booting up control plane ...
	I1209 01:56:17.821597  790270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 01:56:17.821741  790270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 01:56:17.821877  790270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 01:56:17.844021  790270 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 01:56:17.844165  790270 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 01:56:17.851307  790270 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 01:56:17.853267  790270 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 01:56:17.853326  790270 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 01:56:18.011071  790270 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 01:56:18.011402  790270 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 01:56:19.012244  790270 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00184424s
	I1209 01:56:19.015108  790270 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 01:56:19.015220  790270 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.56:8443/livez
	I1209 01:56:19.015300  790270 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 01:56:19.015364  790270 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1209 01:56:21.319027  790270 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.305413868s
	I1209 01:56:22.202478  790270 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.189938581s
	I1209 01:56:24.012813  790270 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001548092s
	I1209 01:56:24.039146  790270 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 01:56:24.053804  790270 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 01:56:24.068616  790270 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 01:56:24.068844  790270 kubeadm.go:319] [mark-control-plane] Marking the node addons-520986 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 01:56:24.080147  790270 kubeadm.go:319] [bootstrap-token] Using token: iz5njm.08avfkkb65ug1lvs
	I1209 01:56:24.081418  790270 out.go:252]   - Configuring RBAC rules ...
	I1209 01:56:24.081531  790270 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 01:56:24.091450  790270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 01:56:24.100831  790270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 01:56:24.104094  790270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 01:56:24.107269  790270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 01:56:24.110706  790270 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 01:56:24.419812  790270 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 01:56:24.866699  790270 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1209 01:56:25.418911  790270 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1209 01:56:25.419917  790270 kubeadm.go:319] 
	I1209 01:56:25.420015  790270 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1209 01:56:25.420061  790270 kubeadm.go:319] 
	I1209 01:56:25.420182  790270 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1209 01:56:25.420192  790270 kubeadm.go:319] 
	I1209 01:56:25.420226  790270 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1209 01:56:25.420315  790270 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 01:56:25.420413  790270 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 01:56:25.420438  790270 kubeadm.go:319] 
	I1209 01:56:25.420527  790270 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1209 01:56:25.420537  790270 kubeadm.go:319] 
	I1209 01:56:25.420601  790270 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 01:56:25.420609  790270 kubeadm.go:319] 
	I1209 01:56:25.420676  790270 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1209 01:56:25.420787  790270 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 01:56:25.420890  790270 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 01:56:25.420899  790270 kubeadm.go:319] 
	I1209 01:56:25.421043  790270 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 01:56:25.421150  790270 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1209 01:56:25.421158  790270 kubeadm.go:319] 
	I1209 01:56:25.421229  790270 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token iz5njm.08avfkkb65ug1lvs \
	I1209 01:56:25.421319  790270 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c11fe7a294546fd865e9cf4259a0b816aae73d916f65cf1122876c70c9af5892 \
	I1209 01:56:25.421339  790270 kubeadm.go:319] 	--control-plane 
	I1209 01:56:25.421343  790270 kubeadm.go:319] 
	I1209 01:56:25.421431  790270 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1209 01:56:25.421439  790270 kubeadm.go:319] 
	I1209 01:56:25.421528  790270 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token iz5njm.08avfkkb65ug1lvs \
	I1209 01:56:25.421643  790270 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c11fe7a294546fd865e9cf4259a0b816aae73d916f65cf1122876c70c9af5892 
	I1209 01:56:25.423681  790270 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
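The only preflight warning kubeadm reports here is that the kubelet unit is not enabled; the remedy the warning itself suggests, if persistence across VM reboots were wanted, is simply:

  sudo systemctl enable kubelet.service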
	I1209 01:56:25.423722  790270 cni.go:84] Creating CNI manager for ""
	I1209 01:56:25.423738  790270 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1209 01:56:25.425355  790270 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 01:56:25.426565  790270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 01:56:25.440377  790270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
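The 496-byte conflist written above is minikube's bridge CNI configuration; its contents are not captured in this report, but it can be inspected on the node (a manual check, assuming SSH access to the VM) with:

  sudo ls -la /etc/cni/net.d
  sudo cat /etc/cni/net.d/1-k8s.conflist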
	I1209 01:56:25.467210  790270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 01:56:25.467293  790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:25.467294  790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-520986 minikube.k8s.io/updated_at=2025_12_09T01_56_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=addons-520986 minikube.k8s.io/primary=true
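The two kubectl calls above bind kube-system:default to cluster-admin and label the node with minikube metadata. A quick verification sketch, assuming an admin kubeconfig for the new cluster:

  kubectl get clusterrolebinding minikube-rbac -o wide
  kubectl get node addons-520986 --show-labels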
	I1209 01:56:25.486538  790270 ops.go:34] apiserver oom_adj: -16
	I1209 01:56:25.607098  790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:26.107754  790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:26.607283  790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:27.107393  790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:27.608126  790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:28.107785  790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:28.607608  790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:29.108191  790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:29.607759  790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:30.107593  790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:30.607410  790270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:30.744750  790270 kubeadm.go:1114] duration metric: took 5.277526181s to wait for elevateKubeSystemPrivileges
	I1209 01:56:30.744822  790270 kubeadm.go:403] duration metric: took 17.221768805s to StartCluster
	I1209 01:56:30.744852  790270 settings.go:142] acquiring lock: {Name:mke007a994b1310d493b4df603715fb4b029e8ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:30.745603  790270 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-785489/kubeconfig
	I1209 01:56:30.746292  790270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-785489/kubeconfig: {Name:mk11cf9ad80d3da3c3f1920bc8be0a3badb85306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:30.747048  790270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 01:56:30.747114  790270 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 01:56:30.747199  790270 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
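The failing LocalPath test depends on the storage-provisioner-rancher entry in the toEnable map above. As a hedged equivalent outside the test harness, the same addon can be toggled and listed on an existing profile with:

  minikube addons enable storage-provisioner-rancher -p addons-520986
  minikube addons list -p addons-520986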
	I1209 01:56:30.747350  790270 config.go:182] Loaded profile config "addons-520986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 01:56:30.747367  790270 addons.go:70] Setting cloud-spanner=true in profile "addons-520986"
	I1209 01:56:30.747371  790270 addons.go:70] Setting gcp-auth=true in profile "addons-520986"
	I1209 01:56:30.747356  790270 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-520986"
	I1209 01:56:30.747394  790270 mustload.go:66] Loading cluster: addons-520986
	I1209 01:56:30.747356  790270 addons.go:70] Setting yakd=true in profile "addons-520986"
	I1209 01:56:30.747407  790270 addons.go:239] Setting addon cloud-spanner=true in "addons-520986"
	I1209 01:56:30.747413  790270 addons.go:70] Setting ingress=true in profile "addons-520986"
	I1209 01:56:30.747425  790270 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-520986"
	I1209 01:56:30.747437  790270 addons.go:239] Setting addon ingress=true in "addons-520986"
	I1209 01:56:30.747445  790270 host.go:66] Checking if "addons-520986" exists ...
	I1209 01:56:30.747436  790270 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-520986"
	I1209 01:56:30.747456  790270 host.go:66] Checking if "addons-520986" exists ...
	I1209 01:56:30.747475  790270 host.go:66] Checking if "addons-520986" exists ...
	I1209 01:56:30.747476  790270 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-520986"
	I1209 01:56:30.747479  790270 addons.go:70] Setting volcano=true in profile "addons-520986"
	I1209 01:56:30.747503  790270 addons.go:239] Setting addon volcano=true in "addons-520986"
	I1209 01:56:30.747530  790270 host.go:66] Checking if "addons-520986" exists ...
	I1209 01:56:30.747602  790270 config.go:182] Loaded profile config "addons-520986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 01:56:30.747350  790270 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-520986"
	I1209 01:56:30.748259  790270 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-520986"
	I1209 01:56:30.748297  790270 host.go:66] Checking if "addons-520986" exists ...
	I1209 01:56:30.748398  790270 addons.go:70] Setting volumesnapshots=true in profile "addons-520986"
	I1209 01:56:30.748413  790270 addons.go:239] Setting addon volumesnapshots=true in "addons-520986"
	I1209 01:56:30.748434  790270 host.go:66] Checking if "addons-520986" exists ...
	I1209 01:56:30.748441  790270 addons.go:70] Setting inspektor-gadget=true in profile "addons-520986"
	I1209 01:56:30.748460  790270 addons.go:239] Setting addon inspektor-gadget=true in "addons-520986"
	I1209 01:56:30.747422  790270 addons.go:239] Setting addon yakd=true in "addons-520986"
	I1209 01:56:30.748513  790270 addons.go:70] Setting ingress-dns=true in profile "addons-520986"
	I1209 01:56:30.748535  790270 host.go:66] Checking if "addons-520986" exists ...
	I1209 01:56:30.748547  790270 addons.go:70] Setting storage-provisioner=true in profile "addons-520986"
	I1209 01:56:30.748549  790270 addons.go:239] Setting addon ingress-dns=true in "addons-520986"
	I1209 01:56:30.748560  790270 addons.go:239] Setting addon storage-provisioner=true in "addons-520986"
	I1209 01:56:30.748575  790270 host.go:66] Checking if "addons-520986" exists ...
	I1209 01:56:30.748584  790270 host.go:66] Checking if "addons-520986" exists ...
	I1209 01:56:30.749028  790270 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-520986"
	I1209 01:56:30.749050  790270 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-520986"
	I1209 01:56:30.749078  790270 host.go:66] Checking if "addons-520986" exists ...
	I1209 01:56:30.747363  790270 addons.go:70] Setting default-storageclass=true in profile "addons-520986"
	I1209 01:56:30.749267  790270 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-520986"
	I1209 01:56:30.747394  790270 addons.go:70] Setting registry=true in profile "addons-520986"
	I1209 01:56:30.749311  790270 addons.go:239] Setting addon registry=true in "addons-520986"
	I1209 01:56:30.749339  790270 host.go:66] Checking if "addons-520986" exists ...
	I1209 01:56:30.748537  790270 addons.go:70] Setting registry-creds=true in profile "addons-520986"
	I1209 01:56:30.749375  790270 addons.go:239] Setting addon registry-creds=true in "addons-520986"
	I1209 01:56:30.749410  790270 host.go:66] Checking if "addons-520986" exists ...
	I1209 01:56:30.748493  790270 host.go:66] Checking if "addons-520986" exists ...
	I1209 01:56:30.749556  790270 addons.go:70] Setting metrics-server=true in profile "addons-520986"
	I1209 01:56:30.749572  790270 addons.go:239] Setting addon metrics-server=true in "addons-520986"
	I1209 01:56:30.749595  790270 host.go:66] Checking if "addons-520986" exists ...
	I1209 01:56:30.750121  790270 out.go:179] * Verifying Kubernetes components...
	I1209 01:56:30.751748  790270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 01:56:30.754313  790270 host.go:66] Checking if "addons-520986" exists ...
	I1209 01:56:30.755383  790270 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-520986"
	I1209 01:56:30.755413  790270 host.go:66] Checking if "addons-520986" exists ...
	I1209 01:56:30.756443  790270 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1209 01:56:30.756476  790270 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1209 01:56:30.757365  790270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1209 01:56:30.757368  790270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1209 01:56:30.758240  790270 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1209 01:56:30.758276  790270 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1209 01:56:30.758327  790270 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1209 01:56:30.758865  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1209 01:56:30.758956  790270 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 01:56:30.759021  790270 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1209 01:56:30.759076  790270 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1209 01:56:30.759415  790270 addons.go:239] Setting addon default-storageclass=true in "addons-520986"
	I1209 01:56:30.759950  790270 host.go:66] Checking if "addons-520986" exists ...
	I1209 01:56:30.759971  790270 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1209 01:56:30.760003  790270 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 01:56:30.760311  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1209 01:56:30.760002  790270 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1209 01:56:30.760355  790270 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1209 01:56:30.760034  790270 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 01:56:30.760620  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 01:56:30.760036  790270 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1209 01:56:30.760712  790270 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1209 01:56:30.760723  790270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1209 01:56:30.760732  790270 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1209 01:56:30.760748  790270 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1209 01:56:30.760756  790270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1209 01:56:30.760769  790270 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1209 01:56:30.760802  790270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1209 01:56:30.761937  790270 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1209 01:56:30.760837  790270 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 01:56:30.762043  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1209 01:56:30.761375  790270 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1209 01:56:30.761461  790270 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 01:56:30.762262  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1209 01:56:30.762980  790270 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 01:56:30.763261  790270 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 01:56:30.763633  790270 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1209 01:56:30.764059  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1209 01:56:30.763649  790270 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1209 01:56:30.764121  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1209 01:56:30.763835  790270 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1209 01:56:30.764441  790270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1209 01:56:30.764447  790270 out.go:179]   - Using image docker.io/registry:3.0.0
	I1209 01:56:30.764908  790270 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 01:56:30.765467  790270 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 01:56:30.765703  790270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1209 01:56:30.765795  790270 out.go:179]   - Using image docker.io/busybox:stable
	I1209 01:56:30.766434  790270 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1209 01:56:30.766453  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1209 01:56:30.766839  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.767072  790270 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1209 01:56:30.767148  790270 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 01:56:30.767161  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1209 01:56:30.767292  790270 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 01:56:30.767316  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1209 01:56:30.769055  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:30.769113  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.769553  790270 addons.go:436] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1209 01:56:30.769577  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1209 01:56:30.770894  790270 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1209 01:56:30.771959  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:30.772718  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.773470  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.774073  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.774178  790270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1209 01:56:30.774664  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:30.774706  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.775210  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:30.775244  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.775628  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:30.775706  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:30.775745  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.775941  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:30.776005  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.776496  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.776791  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:30.776936  790270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1209 01:56:30.777195  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.777393  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:30.777427  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.778030  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:30.778053  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.778065  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.778340  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:30.778720  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:30.779017  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:30.779052  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.779179  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.779300  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.779404  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:30.779444  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:30.779500  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.779788  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.779971  790270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1209 01:56:30.780006  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:30.780464  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:30.780507  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.780619  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:30.780654  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.780702  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:30.780729  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.780899  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:30.781022  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:30.781173  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.781290  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:30.781497  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.781361  790270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1209 01:56:30.781573  790270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1209 01:56:30.781600  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.781762  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:30.781795  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.781826  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.782232  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:30.782231  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:30.782282  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.782492  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:30.782528  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.782549  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:30.782799  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:30.782833  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.782946  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:30.783420  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:30.785121  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.785671  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:30.785701  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:30.785928  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	W1209 01:56:30.885803  790270 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55856->192.168.39.56:22: read: connection reset by peer
	I1209 01:56:30.885852  790270 retry.go:31] will retry after 315.630491ms: ssh: handshake failed: read tcp 192.168.39.1:55856->192.168.39.56:22: read: connection reset by peer
	W1209 01:56:30.885957  790270 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55868->192.168.39.56:22: read: connection reset by peer
	I1209 01:56:30.885973  790270 retry.go:31] will retry after 159.894679ms: ssh: handshake failed: read tcp 192.168.39.1:55868->192.168.39.56:22: read: connection reset by peer
	W1209 01:56:31.047406  790270 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55912->192.168.39.56:22: read: connection reset by peer
	I1209 01:56:31.047447  790270 retry.go:31] will retry after 517.041324ms: ssh: handshake failed: read tcp 192.168.39.1:55912->192.168.39.56:22: read: connection reset by peer
	I1209 01:56:31.687907  790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 01:56:31.928222  790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 01:56:31.943684  790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1209 01:56:31.979960  790270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1209 01:56:31.979989  790270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1209 01:56:32.007262  790270 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1209 01:56:32.007296  790270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1209 01:56:32.030729  790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 01:56:32.077837  790270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 01:56:32.077866  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1209 01:56:32.098682  790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1209 01:56:32.115277  790270 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1209 01:56:32.115311  790270 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1209 01:56:32.174232  790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 01:56:32.191960  790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1209 01:56:32.212183  790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 01:56:32.270257  790270 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.523162924s)
	I1209 01:56:32.270281  790270 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.518500815s)
	I1209 01:56:32.270384  790270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 01:56:32.270485  790270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
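	The /bin/bash pipeline above edits the coredns ConfigMap in place: the sed expression inserts a hosts block in front of the existing "forward . /etc/resolv.conf" directive so pods can resolve host.minikube.internal to the host gateway (192.168.39.1 here), and adds a "log" directive in front of "errors". A minimal sketch of just the inserted Corefile fragments, assuming the rest of the stock Corefile is unchanged:

	# inserted before the existing "errors" directive:
	log
	# inserted before the existing "forward . /etc/resolv.conf" directive:
	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}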
	I1209 01:56:32.343461  790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 01:56:32.377118  790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1209 01:56:32.421181  790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 01:56:32.534208  790270 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1209 01:56:32.534239  790270 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1209 01:56:32.544648  790270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1209 01:56:32.544672  790270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1209 01:56:32.586284  790270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1209 01:56:32.586311  790270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1209 01:56:32.652799  790270 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1209 01:56:32.652833  790270 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1209 01:56:32.689440  790270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 01:56:32.689474  790270 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 01:56:32.866152  790270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1209 01:56:32.866193  790270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1209 01:56:32.950224  790270 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1209 01:56:32.950265  790270 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1209 01:56:32.970942  790270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1209 01:56:32.970971  790270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1209 01:56:33.012217  790270 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1209 01:56:33.012249  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1209 01:56:33.035091  790270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 01:56:33.035125  790270 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 01:56:33.264956  790270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1209 01:56:33.264986  790270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1209 01:56:33.351866  790270 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1209 01:56:33.351900  790270 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1209 01:56:33.377442  790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1209 01:56:33.442186  790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 01:56:33.493160  790270 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1209 01:56:33.493198  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1209 01:56:33.610018  790270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1209 01:56:33.610058  790270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1209 01:56:33.719009  790270 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 01:56:33.719037  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1209 01:56:33.818376  790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1209 01:56:33.915771  790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.227805617s)
	I1209 01:56:33.927450  790270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1209 01:56:33.927486  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1209 01:56:34.260305  790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 01:56:34.275552  790270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1209 01:56:34.275586  790270 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1209 01:56:34.885261  790270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1209 01:56:34.885287  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1209 01:56:34.994611  790270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1209 01:56:34.994642  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1209 01:56:35.364391  790270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 01:56:35.364431  790270 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1209 01:56:35.689860  790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 01:56:36.576722  790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.648453464s)
	I1209 01:56:38.211302  790270 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1209 01:56:38.214879  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:38.215429  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:38.215477  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:38.215671  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:38.435516  790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (6.491782716s)
	I1209 01:56:38.483449  790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.452671472s)
	I1209 01:56:38.483510  790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (6.384783918s)
	I1209 01:56:38.483568  790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.309302132s)
	I1209 01:56:38.908824  790270 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1209 01:56:39.160529  790270 addons.go:239] Setting addon gcp-auth=true in "addons-520986"
	I1209 01:56:39.160608  790270 host.go:66] Checking if "addons-520986" exists ...
	I1209 01:56:39.162929  790270 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1209 01:56:39.165776  790270 main.go:143] libmachine: domain addons-520986 has defined MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:39.166436  790270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:0b:7a", ip: ""} in network mk-addons-520986: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:02 +0000 UTC Type:0 Mac:52:54:00:09:0b:7a Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-520986 Clientid:01:52:54:00:09:0b:7a}
	I1209 01:56:39.166475  790270 main.go:143] libmachine: domain addons-520986 has defined IP address 192.168.39.56 and MAC address 52:54:00:09:0b:7a in network mk-addons-520986
	I1209 01:56:39.166692  790270 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/addons-520986/id_rsa Username:docker}
	I1209 01:56:44.787014  790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (12.574786374s)
	I1209 01:56:44.787091  790270 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (12.51654395s)
	I1209 01:56:44.787108  790270 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1209 01:56:44.787116  790270 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (12.516695671s)
	I1209 01:56:44.787269  790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (12.443769934s)
	I1209 01:56:44.787304  790270 addons.go:495] Verifying addon ingress=true in "addons-520986"
	I1209 01:56:44.787335  790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (12.410171254s)
	I1209 01:56:44.787351  790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.595355719s)
	I1209 01:56:44.787414  790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.409947704s)
	I1209 01:56:44.787442  790270 addons.go:495] Verifying addon registry=true in "addons-520986"
	I1209 01:56:44.787542  790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.345316669s)
	I1209 01:56:44.787568  790270 addons.go:495] Verifying addon metrics-server=true in "addons-520986"
	I1209 01:56:44.787603  790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.969190073s)
	I1209 01:56:44.788204  790270 node_ready.go:35] waiting up to 6m0s for node "addons-520986" to be "Ready" ...
	I1209 01:56:44.787382  790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (12.366179018s)
	I1209 01:56:44.787747  790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.527408329s)
	W1209 01:56:44.788387  790270 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 01:56:44.788412  790270 retry.go:31] will retry after 241.528383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
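	This is the usual ordering failure when CRDs and custom resources land in the same single-pass kubectl apply: the VolumeSnapshot CRDs are created in this batch, but the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml cannot be mapped until those CRDs are established, so minikube retries (and at 01:56:45 re-applies the same file set with --force). A rough manual equivalent, assuming a kubeconfig pointed at this cluster, is to wait for the CRD and then re-apply only the snapshot class:

	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml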
	I1209 01:56:44.787982  790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.09807641s)
	I1209 01:56:44.788439  790270 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-520986"
	I1209 01:56:44.788011  790270 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.625061466s)
	I1209 01:56:44.788739  790270 out.go:179] * Verifying ingress addon...
	I1209 01:56:44.789727  790270 out.go:179] * Verifying registry addon...
	I1209 01:56:44.789728  790270 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-520986 service yakd-dashboard -n yakd-dashboard
	
	I1209 01:56:44.790505  790270 out.go:179] * Verifying csi-hostpath-driver addon...
	I1209 01:56:44.790515  790270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1209 01:56:44.791200  790270 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1209 01:56:44.792156  790270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1209 01:56:44.792188  790270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
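	The three kapi waiters above poll pods by label selector until every matching pod reports Ready. The same state can be inspected by hand with kubectl; the namespaces and selectors are taken from the log lines above, and the context name is assumed to match the profile:

	kubectl --context addons-520986 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
	kubectl --context addons-520986 -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
	kubectl --context addons-520986 -n kube-system get pods -l kubernetes.io/minikube-addons=registry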
	I1209 01:56:44.793613  790270 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1209 01:56:44.794827  790270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1209 01:56:44.794861  790270 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1209 01:56:44.896959  790270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1209 01:56:44.896990  790270 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1209 01:56:44.906527  790270 node_ready.go:49] node "addons-520986" is "Ready"
	I1209 01:56:44.906557  790270 node_ready.go:38] duration metric: took 118.298338ms for node "addons-520986" to be "Ready" ...
	I1209 01:56:44.906573  790270 api_server.go:52] waiting for apiserver process to appear ...
	I1209 01:56:44.906653  790270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 01:56:44.964542  790270 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1209 01:56:44.964569  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:44.964589  790270 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 01:56:44.964606  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:44.964596  790270 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 01:56:44.964623  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:44.993844  790270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 01:56:44.993870  790270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1209 01:56:45.004366  790270 api_server.go:72] duration metric: took 14.257187733s to wait for apiserver process to appear ...
	I1209 01:56:45.004389  790270 api_server.go:88] waiting for apiserver healthz status ...
	I1209 01:56:45.004425  790270 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I1209 01:56:45.030088  790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 01:56:45.061793  790270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 01:56:45.066028  790270 api_server.go:279] https://192.168.39.56:8443/healthz returned 200:
	ok
	I1209 01:56:45.121762  790270 api_server.go:141] control plane version: v1.34.2
	I1209 01:56:45.121805  790270 api_server.go:131] duration metric: took 117.409361ms to wait for apiserver health ...
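	The health check above probes the apiserver at https://192.168.39.56:8443/healthz and treats a 200 response with body "ok" as healthy. A rough manual equivalent, assuming a kubeconfig for this cluster, goes through the same endpoint via kubectl:

	kubectl get --raw /healthz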
	I1209 01:56:45.121814  790270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 01:56:45.200681  790270 system_pods.go:59] 20 kube-system pods found
	I1209 01:56:45.200730  790270 system_pods.go:61] "amd-gpu-device-plugin-465wk" [271974ca-7e3b-4c84-8934-f8e107aceaa3] Running
	I1209 01:56:45.200736  790270 system_pods.go:61] "coredns-66bc5c9577-j5w2c" [9e9c57dc-b6bd-42be-8a3b-f1e10a9fb863] Running
	I1209 01:56:45.200740  790270 system_pods.go:61] "coredns-66bc5c9577-qzn64" [85f77647-b009-4c5e-a48f-443611e37520] Running
	I1209 01:56:45.200748  790270 system_pods.go:61] "csi-hostpath-attacher-0" [16b4ff75-ad5f-4f79-9478-0a122848f9a4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 01:56:45.200753  790270 system_pods.go:61] "csi-hostpath-resizer-0" [3a1a1237-31f7-4ca1-87a4-02b6d2387c27] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 01:56:45.200759  790270 system_pods.go:61] "csi-hostpathplugin-mznj5" [d90dc4bb-01fc-4ff5-9f29-33d2a8cd7c4c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 01:56:45.200763  790270 system_pods.go:61] "etcd-addons-520986" [eff98cf5-6ef4-4096-9da0-f8f6eab8818b] Running
	I1209 01:56:45.200770  790270 system_pods.go:61] "kube-apiserver-addons-520986" [1c57a257-5404-4891-8de2-64d25b9280fb] Running
	I1209 01:56:45.200773  790270 system_pods.go:61] "kube-controller-manager-addons-520986" [4e29595d-3d0f-4985-a4f3-1b0b0061dbd5] Running
	I1209 01:56:45.200778  790270 system_pods.go:61] "kube-ingress-dns-minikube" [f2d2941a-a050-42ba-966c-f2a4c9f45ecf] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1209 01:56:45.200781  790270 system_pods.go:61] "kube-proxy-55jwk" [cef9515a-0047-4058-95ce-18b2265f4a40] Running
	I1209 01:56:45.200785  790270 system_pods.go:61] "kube-scheduler-addons-520986" [a272e14e-90af-41e0-a5ba-45bd0d3467c6] Running
	I1209 01:56:45.200789  790270 system_pods.go:61] "metrics-server-85b7d694d7-6h6ks" [9933e398-1bd2-4f95-9968-ac571b18b98d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 01:56:45.200795  790270 system_pods.go:61] "nvidia-device-plugin-daemonset-fmfwp" [6680e716-57e7-4dac-bfc6-474c174bfa12] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1209 01:56:45.200804  790270 system_pods.go:61] "registry-6b586f9694-vlvl7" [101e7e22-6338-450e-b175-a29aa66aa838] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 01:56:45.200809  790270 system_pods.go:61] "registry-creds-764b6fb674-srdn7" [566b01af-141e-4867-8fff-0b9a84525ab7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1209 01:56:45.200813  790270 system_pods.go:61] "registry-proxy-md9zq" [b449333e-cc2d-4741-a901-fdcbae2dbeeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 01:56:45.200821  790270 system_pods.go:61] "snapshot-controller-7d9fbc56b8-v4xmh" [5182815a-a54b-4cdf-bb5e-722920ab9087] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:45.200826  790270 system_pods.go:61] "snapshot-controller-7d9fbc56b8-vgbx2" [05d44bab-ebf0-4e4c-b9ff-0255e3c6f3ec] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:45.200833  790270 system_pods.go:61] "storage-provisioner" [7ab5a70f-f2ad-4920-8048-ba19c19bed2d] Running
	I1209 01:56:45.200845  790270 system_pods.go:74] duration metric: took 79.018375ms to wait for pod list to return data ...
	I1209 01:56:45.200858  790270 default_sa.go:34] waiting for default service account to be created ...
	I1209 01:56:45.298596  790270 default_sa.go:45] found service account: "default"
	I1209 01:56:45.298630  790270 default_sa.go:55] duration metric: took 97.765728ms for default service account to be created ...
	I1209 01:56:45.298656  790270 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 01:56:45.371360  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:45.371628  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:45.372009  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:45.372225  790270 system_pods.go:86] 20 kube-system pods found
	I1209 01:56:45.372249  790270 system_pods.go:89] "amd-gpu-device-plugin-465wk" [271974ca-7e3b-4c84-8934-f8e107aceaa3] Running
	I1209 01:56:45.372255  790270 system_pods.go:89] "coredns-66bc5c9577-j5w2c" [9e9c57dc-b6bd-42be-8a3b-f1e10a9fb863] Running
	I1209 01:56:45.372259  790270 system_pods.go:89] "coredns-66bc5c9577-qzn64" [85f77647-b009-4c5e-a48f-443611e37520] Running
	I1209 01:56:45.372266  790270 system_pods.go:89] "csi-hostpath-attacher-0" [16b4ff75-ad5f-4f79-9478-0a122848f9a4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 01:56:45.372270  790270 system_pods.go:89] "csi-hostpath-resizer-0" [3a1a1237-31f7-4ca1-87a4-02b6d2387c27] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 01:56:45.372279  790270 system_pods.go:89] "csi-hostpathplugin-mznj5" [d90dc4bb-01fc-4ff5-9f29-33d2a8cd7c4c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 01:56:45.372290  790270 system_pods.go:89] "etcd-addons-520986" [eff98cf5-6ef4-4096-9da0-f8f6eab8818b] Running
	I1209 01:56:45.372294  790270 system_pods.go:89] "kube-apiserver-addons-520986" [1c57a257-5404-4891-8de2-64d25b9280fb] Running
	I1209 01:56:45.372299  790270 system_pods.go:89] "kube-controller-manager-addons-520986" [4e29595d-3d0f-4985-a4f3-1b0b0061dbd5] Running
	I1209 01:56:45.372305  790270 system_pods.go:89] "kube-ingress-dns-minikube" [f2d2941a-a050-42ba-966c-f2a4c9f45ecf] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1209 01:56:45.372308  790270 system_pods.go:89] "kube-proxy-55jwk" [cef9515a-0047-4058-95ce-18b2265f4a40] Running
	I1209 01:56:45.372311  790270 system_pods.go:89] "kube-scheduler-addons-520986" [a272e14e-90af-41e0-a5ba-45bd0d3467c6] Running
	I1209 01:56:45.372320  790270 system_pods.go:89] "metrics-server-85b7d694d7-6h6ks" [9933e398-1bd2-4f95-9968-ac571b18b98d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 01:56:45.372328  790270 system_pods.go:89] "nvidia-device-plugin-daemonset-fmfwp" [6680e716-57e7-4dac-bfc6-474c174bfa12] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1209 01:56:45.372335  790270 system_pods.go:89] "registry-6b586f9694-vlvl7" [101e7e22-6338-450e-b175-a29aa66aa838] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 01:56:45.372342  790270 system_pods.go:89] "registry-creds-764b6fb674-srdn7" [566b01af-141e-4867-8fff-0b9a84525ab7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1209 01:56:45.372346  790270 system_pods.go:89] "registry-proxy-md9zq" [b449333e-cc2d-4741-a901-fdcbae2dbeeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 01:56:45.372359  790270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v4xmh" [5182815a-a54b-4cdf-bb5e-722920ab9087] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:45.372364  790270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vgbx2" [05d44bab-ebf0-4e4c-b9ff-0255e3c6f3ec] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:45.372368  790270 system_pods.go:89] "storage-provisioner" [7ab5a70f-f2ad-4920-8048-ba19c19bed2d] Running
	I1209 01:56:45.372378  790270 system_pods.go:126] duration metric: took 73.715957ms to wait for k8s-apps to be running ...
	I1209 01:56:45.372385  790270 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 01:56:45.372440  790270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 01:56:45.385804  790270 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-520986" context rescaled to 1 replicas
	I1209 01:56:45.829710  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:45.829959  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:45.830004  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:46.302641  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:46.303957  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:46.303958  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:46.851877  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:46.852022  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:46.856219  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:47.015149  790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.984965311s)
	I1209 01:56:47.015203  790270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.953352972s)
	I1209 01:56:47.015242  790270 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.642776551s)
	I1209 01:56:47.015272  790270 system_svc.go:56] duration metric: took 1.642881096s WaitForService to wait for kubelet
	I1209 01:56:47.015288  790270 kubeadm.go:587] duration metric: took 16.268114878s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 01:56:47.015319  790270 node_conditions.go:102] verifying NodePressure condition ...
	I1209 01:56:47.016197  790270 addons.go:495] Verifying addon gcp-auth=true in "addons-520986"
	I1209 01:56:47.017706  790270 out.go:179] * Verifying gcp-auth addon...
	I1209 01:56:47.019863  790270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1209 01:56:47.020318  790270 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 01:56:47.020348  790270 node_conditions.go:123] node cpu capacity is 2
	I1209 01:56:47.020366  790270 node_conditions.go:105] duration metric: took 5.041253ms to run NodePressure ...
	I1209 01:56:47.020381  790270 start.go:242] waiting for startup goroutines ...
	I1209 01:56:47.027072  790270 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1209 01:56:47.027088  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:47.302404  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:47.303989  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:47.304224  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:47.523580  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:47.797838  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:47.797987  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:47.798771  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:48.024672  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:48.297643  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:48.298213  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:48.298870  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:48.524379  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:48.801341  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:48.801424  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:48.801709  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:49.025766  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:49.296509  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:49.297502  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:49.297799  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:49.525328  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:49.799771  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:49.800096  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:49.800697  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:50.024826  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:50.307822  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:50.307878  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:50.307930  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:50.526200  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:50.800842  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:50.801390  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:50.803241  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:51.025529  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:51.299671  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:51.299695  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:51.299957  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:51.522985  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:51.795111  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:51.797022  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:51.797802  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:52.026634  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:52.301718  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:52.305093  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:52.305412  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:52.524001  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:52.798487  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:52.798779  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:52.798938  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:53.023524  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:53.304561  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:53.304983  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:53.307362  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:53.524447  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:53.795481  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:53.796552  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:53.797605  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:54.025856  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:54.296669  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:54.297336  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:54.297366  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:54.523031  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:54.798213  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:54.798276  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:54.798285  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:55.024071  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:55.297927  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:55.300258  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:55.300521  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:55.527174  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:55.832282  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:55.832373  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:55.835643  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:56.024361  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:56.295199  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:56.296898  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:56.297483  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:56.524383  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:56.803076  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:56.803102  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:56.805410  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:57.024303  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:57.297244  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:57.298062  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:57.298062  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:57.523303  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:57.795741  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:57.799707  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:57.800538  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:58.109484  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:58.297332  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:58.297504  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:58.297662  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:58.523915  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:58.796746  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:58.796883  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:58.797756  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:59.024818  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:59.363098  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:59.363382  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:59.364545  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:59.539100  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:59.796571  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:59.796590  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:59.797589  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:00.028560  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:00.297913  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:00.297944  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:00.298220  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:00.524661  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:00.803936  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:00.805177  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:00.805254  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:01.024816  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:01.296159  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:01.296352  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:01.296644  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:01.524740  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:01.796303  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:01.796812  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:01.798716  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:02.023667  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:02.398732  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:02.400765  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:02.401338  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:02.524962  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:02.798464  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:02.798543  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:02.798821  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:03.030448  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:03.297517  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:03.298456  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:03.299481  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:03.524534  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:03.796859  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:03.797886  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:03.799881  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:04.024290  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:04.302649  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:04.302669  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:04.304353  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:04.528566  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:04.812456  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:04.812558  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:04.812682  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:05.048799  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:05.299832  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:05.325456  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:05.325609  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:05.523412  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:05.802437  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:05.805404  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:05.805647  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:06.028928  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:06.310307  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:06.310334  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:06.311787  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:06.524469  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:06.863833  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:06.863882  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:06.864156  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:07.032161  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:07.296805  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:07.296874  790270 kapi.go:107] duration metric: took 22.504683835s to wait for kubernetes.io/minikube-addons=registry ...
	I1209 01:57:07.298411  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:07.524053  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:07.797791  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:07.798656  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:08.023885  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:08.297144  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:08.297306  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:08.523972  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:08.796738  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:08.798095  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:09.023094  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:09.296297  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:09.296897  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:09.527040  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:09.796437  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:09.797060  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:10.023735  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:10.297429  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:10.299624  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:10.524478  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:10.796302  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:10.798107  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:11.023403  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:11.296675  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:11.296685  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:11.638495  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:11.796102  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:11.797189  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:12.023453  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:12.299844  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:12.304308  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:12.533637  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:12.800113  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:12.806819  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:13.025102  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:13.297517  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:13.298413  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:13.524566  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:13.798770  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:13.798924  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:14.025416  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:14.299246  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:14.299249  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:14.522925  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:14.797535  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:14.798039  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:15.025808  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:15.334275  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:15.346993  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:15.523607  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:15.796887  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:15.799238  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:16.023694  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:16.297288  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:16.298855  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:16.527444  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:16.800308  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:16.803960  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:17.025306  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:17.296219  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:17.300381  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:17.523472  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:17.801291  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:17.803298  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:18.028565  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:18.295832  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:18.298355  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:18.523527  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:18.796327  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:18.796348  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:19.024947  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:19.305010  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:19.307474  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:19.524487  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:19.797305  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:19.797365  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:20.023568  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:20.299619  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:20.304026  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:20.523634  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:20.800591  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:20.801509  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:21.023827  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:21.298682  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:21.301278  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:21.524182  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:21.798298  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:21.799596  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:22.024485  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:22.297358  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:22.299075  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:22.527264  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:22.804864  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:22.805573  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:23.024566  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:23.299950  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:23.302857  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:23.541953  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:23.798122  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:23.798965  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:24.023793  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:24.304608  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:24.304981  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:24.524268  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:24.798401  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:24.799095  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:25.024002  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:25.552982  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:25.557543  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:25.562293  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:25.797868  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:25.799915  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:26.023846  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:26.298427  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:26.298779  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:26.523152  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:26.798003  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:26.799947  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:27.023556  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:27.295143  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:27.297442  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:27.524610  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:27.796012  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:27.797042  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:28.022825  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:28.297006  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:28.297044  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:28.523258  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:28.797609  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:28.799517  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:29.024096  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:29.303094  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:29.307055  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:29.524229  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:29.796848  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:29.800891  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:30.027794  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:30.296225  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:30.298397  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:30.524719  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:30.796543  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:30.797410  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:31.025613  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:31.295896  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:31.298057  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:31.523711  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:31.798749  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:31.799977  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:32.024682  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:32.298383  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:32.299047  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:32.523224  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:32.797505  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:32.799391  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:33.023871  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:33.297386  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:33.300040  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:33.526660  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:33.798950  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:33.799766  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:34.030953  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:34.440075  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:34.440333  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:34.545057  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:34.797820  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:34.798518  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:35.026665  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:35.298050  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:35.298087  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:35.529732  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:35.798384  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:35.800921  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:36.025107  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:36.297864  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:36.297911  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:36.523858  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:36.798755  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:36.801345  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:37.043033  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:37.297145  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:37.298561  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:37.523791  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:37.797819  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:37.799076  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:38.025047  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:38.297696  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:38.297912  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:38.522999  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:38.798715  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:38.799489  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:39.026833  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:39.298122  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:39.298735  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:39.523748  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:39.798656  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:39.800478  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:40.026952  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:40.304689  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:40.306282  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:40.525212  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:40.796575  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:40.796770  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:41.024586  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:41.295813  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:41.298016  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:41.523648  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:41.795738  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:41.796333  790270 kapi.go:107] duration metric: took 57.004173314s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1209 01:57:42.023996  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:42.297313  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:42.523679  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:42.796399  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:43.023603  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:43.296278  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:43.523399  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:43.795025  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:44.032306  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:44.298658  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:44.523870  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:44.797945  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:45.032714  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:45.300872  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:45.527870  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:45.811770  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:46.045440  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:46.297148  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:46.523832  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:46.796581  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:47.023324  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:47.296403  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:47.525947  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:47.796766  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:48.024265  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:48.294627  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:48.523656  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:49.045905  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:49.050470  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:49.296524  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:49.524794  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:49.796053  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:50.025010  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:50.295995  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:50.523735  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:50.795744  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:51.040619  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:51.300787  790270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:51.524631  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:51.798154  790270 kapi.go:107] duration metric: took 1m7.006948083s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1209 01:57:52.028016  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:52.553098  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:53.023624  790270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:53.522721  790270 kapi.go:107] duration metric: took 1m6.502854307s to wait for kubernetes.io/minikube-addons=gcp-auth ...
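Note on the kapi.go lines above: they are minikube's internal polling loop, which repeatedly lists pods matching each addon's label selector until the pods leave Pending, then records the total wait as a duration metric. A roughly equivalent manual check, sketched with kubectl under the assumption that the ingress controller runs in the ingress-nginx namespace and the csi-hostpath-driver pods run in kube-system (namespaces are assumptions, not taken from this log), would be:

    kubectl --context addons-520986 -n ingress-nginx wait pod \
      -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m
    kubectl --context addons-520986 -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=6m

This is only an illustration of what the wait loop is checking; the test harness uses its own client-side poller rather than kubectl wait.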
	I1209 01:57:53.524145  790270 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-520986 cluster.
	I1209 01:57:53.525395  790270 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1209 01:57:53.526343  790270 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1209 01:57:53.527527  790270 out.go:179] * Enabled addons: nvidia-device-plugin, storage-provisioner, inspektor-gadget, registry-creds, amd-gpu-device-plugin, storage-provisioner-rancher, ingress-dns, cloud-spanner, volcano, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1209 01:57:53.528763  790270 addons.go:530] duration metric: took 1m22.781576259s for enable addons: enabled=[nvidia-device-plugin storage-provisioner inspektor-gadget registry-creds amd-gpu-device-plugin storage-provisioner-rancher ingress-dns cloud-spanner volcano metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
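As the gcp-auth note above says, credential injection can be skipped for an individual pod by adding a label with the gcp-auth-skip-secret key. A minimal sketch of such a pod (the expected label value is assumed to be "true", and the pod name and image are placeholders, not part of this test run):

    kubectl --context addons-520986 run no-gcp-creds --image=busybox \
      --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600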
	I1209 01:57:53.528814  790270 start.go:247] waiting for cluster config update ...
	I1209 01:57:53.528842  790270 start.go:256] writing updated cluster config ...
	I1209 01:57:53.529150  790270 ssh_runner.go:195] Run: rm -f paused
	I1209 01:57:53.536902  790270 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 01:57:53.541010  790270 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j5w2c" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:53.546405  790270 pod_ready.go:94] pod "coredns-66bc5c9577-j5w2c" is "Ready"
	I1209 01:57:53.546433  790270 pod_ready.go:86] duration metric: took 5.395319ms for pod "coredns-66bc5c9577-j5w2c" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:53.548652  790270 pod_ready.go:83] waiting for pod "etcd-addons-520986" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:53.556910  790270 pod_ready.go:94] pod "etcd-addons-520986" is "Ready"
	I1209 01:57:53.556939  790270 pod_ready.go:86] duration metric: took 8.263896ms for pod "etcd-addons-520986" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:53.560280  790270 pod_ready.go:83] waiting for pod "kube-apiserver-addons-520986" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:53.566428  790270 pod_ready.go:94] pod "kube-apiserver-addons-520986" is "Ready"
	I1209 01:57:53.566452  790270 pod_ready.go:86] duration metric: took 6.146456ms for pod "kube-apiserver-addons-520986" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:53.568528  790270 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-520986" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:53.941979  790270 pod_ready.go:94] pod "kube-controller-manager-addons-520986" is "Ready"
	I1209 01:57:53.942023  790270 pod_ready.go:86] duration metric: took 373.470419ms for pod "kube-controller-manager-addons-520986" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:54.149948  790270 pod_ready.go:83] waiting for pod "kube-proxy-55jwk" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:54.542054  790270 pod_ready.go:94] pod "kube-proxy-55jwk" is "Ready"
	I1209 01:57:54.542097  790270 pod_ready.go:86] duration metric: took 392.105036ms for pod "kube-proxy-55jwk" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:54.742233  790270 pod_ready.go:83] waiting for pod "kube-scheduler-addons-520986" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:55.140711  790270 pod_ready.go:94] pod "kube-scheduler-addons-520986" is "Ready"
	I1209 01:57:55.140747  790270 pod_ready.go:86] duration metric: took 398.475149ms for pod "kube-scheduler-addons-520986" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:55.140759  790270 pod_ready.go:40] duration metric: took 1.603807652s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
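The pod_ready checks above read each pod's Ready condition directly instead of blocking on a watch. A hedged one-liner that surfaces the same information for, say, the CoreDNS pods (the jsonpath expression is an illustration, not what the test harness runs):

    kubectl --context addons-520986 -n kube-system get pods -l k8s-app=kube-dns \
      -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'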
	I1209 01:57:55.189030  790270 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1209 01:57:55.190941  790270 out.go:179] * Done! kubectl is now configured to use "addons-520986" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                       NAMESPACE
	39fe88ba71d2f       d4918ca78576a       4 minutes ago       Running             nginx                     0                   dbd8aabfaf48f       nginx                                     default
	8e9d1663fa24f       56cc512116c8f       5 minutes ago       Running             busybox                   0                   022924e7b1c19       busybox                                   default
	9b68209ea1dc6       e16d1e3a10667       6 minutes ago       Running             local-path-provisioner    0                   ccc510c62a230       local-path-provisioner-648f6765c9-hrjxd   local-path-storage
	2a56255a22665       d5e667c0f2bb6       7 minutes ago       Running             amd-gpu-device-plugin     0                   79c3913bccb20       amd-gpu-device-plugin-465wk               kube-system
	d553c7d0bef84       6e38f40d628db       7 minutes ago       Running             storage-provisioner       0                   d4b9d3be67780       storage-provisioner                       kube-system
	23174f258c854       52546a367cc9e       7 minutes ago       Running             coredns                   0                   ea664edffd935       coredns-66bc5c9577-j5w2c                  kube-system
	13a4fa31c190b       8aa150647e88a       7 minutes ago       Running             kube-proxy                0                   e1ec5b27c8fd6       kube-proxy-55jwk                          kube-system
	9c7b09226f81d       a5f569d49a979       7 minutes ago       Running             kube-apiserver            0                   afc469b9010cf       kube-apiserver-addons-520986              kube-system
	29e4423f1374f       88320b5498ff2       7 minutes ago       Running             kube-scheduler            0                   2c8f1e2b69c17       kube-scheduler-addons-520986              kube-system
	0548de094b66e       a3e246e9556e9       7 minutes ago       Running             etcd                      0                   8fbf81aa4719c       etcd-addons-520986                        kube-system
	a61c6f513a666       01e8bacf0f500       7 minutes ago       Running             kube-controller-manager   0                   7199f8af8ddcb       kube-controller-manager-addons-520986     kube-system
	
	
	==> containerd <==
	Dec 09 02:04:04 addons-520986 containerd[822]: time="2025-12-09T02:04:04.872664177Z" level=info msg="container event discarded" container=b6d8ae3444139c5165bf01f045fbbe8f81cec799b626d7fe5e2b641d32f954b7 type=CONTAINER_DELETED_EVENT
	Dec 09 02:04:08 addons-520986 containerd[822]: time="2025-12-09T02:04:08.798003780Z" level=info msg="container event discarded" container=79255f868ba3325e48a459be04f990494c570aa595b9b3778c4cd14d8671b2ed type=CONTAINER_CREATED_EVENT
	Dec 09 02:04:08 addons-520986 containerd[822]: time="2025-12-09T02:04:08.798052977Z" level=info msg="container event discarded" container=79255f868ba3325e48a459be04f990494c570aa595b9b3778c4cd14d8671b2ed type=CONTAINER_STARTED_EVENT
	Dec 09 02:04:09 addons-520986 containerd[822]: time="2025-12-09T02:04:09.499784148Z" level=info msg="container event discarded" container=27135839a9beaf5344da00f37181e96466f7209ba71851ae74af4b74fdfecbea type=CONTAINER_CREATED_EVENT
	Dec 09 02:04:09 addons-520986 containerd[822]: time="2025-12-09T02:04:09.592239431Z" level=info msg="container event discarded" container=27135839a9beaf5344da00f37181e96466f7209ba71851ae74af4b74fdfecbea type=CONTAINER_STARTED_EVENT
	Dec 09 02:04:09 addons-520986 containerd[822]: time="2025-12-09T02:04:09.660249552Z" level=info msg="container event discarded" container=27135839a9beaf5344da00f37181e96466f7209ba71851ae74af4b74fdfecbea type=CONTAINER_STOPPED_EVENT
	Dec 09 02:04:09 addons-520986 containerd[822]: time="2025-12-09T02:04:09.833974424Z" level=info msg="container event discarded" container=1dc578fd6ba60b648fde0ed7b6085c5a6a527d2340ad7bbc421c0d9e44393fd4 type=CONTAINER_STOPPED_EVENT
	Dec 09 02:04:09 addons-520986 containerd[822]: time="2025-12-09T02:04:09.934628923Z" level=info msg="container event discarded" container=92c6653ed758f6f0a75ab0f308c3ced18d738c23c526f387bdaeac8570c294f5 type=CONTAINER_STOPPED_EVENT
	Dec 09 02:04:10 addons-520986 containerd[822]: time="2025-12-09T02:04:10.641143934Z" level=info msg="container event discarded" container=4ee38fd6c723372bdd9c225f8e603ca77c8ae63ab6aa0103ad4896598f7e015b type=CONTAINER_STOPPED_EVENT
	Dec 09 02:04:10 addons-520986 containerd[822]: time="2025-12-09T02:04:10.724585548Z" level=info msg="container event discarded" container=6afc2b9c115a65b3ab545f380c592edae369d4baead9af5ea76407b410ff9ed1 type=CONTAINER_STOPPED_EVENT
	Dec 09 02:04:10 addons-520986 containerd[822]: time="2025-12-09T02:04:10.924620449Z" level=info msg="container event discarded" container=1dc578fd6ba60b648fde0ed7b6085c5a6a527d2340ad7bbc421c0d9e44393fd4 type=CONTAINER_DELETED_EVENT
	Dec 09 02:04:10 addons-520986 containerd[822]: time="2025-12-09T02:04:10.988522300Z" level=info msg="container event discarded" container=4ee38fd6c723372bdd9c225f8e603ca77c8ae63ab6aa0103ad4896598f7e015b type=CONTAINER_DELETED_EVENT
	Dec 09 02:04:11 addons-520986 containerd[822]: time="2025-12-09T02:04:11.093057406Z" level=info msg="container event discarded" container=79255f868ba3325e48a459be04f990494c570aa595b9b3778c4cd14d8671b2ed type=CONTAINER_STOPPED_EVENT
	Dec 09 02:04:12 addons-520986 containerd[822]: time="2025-12-09T02:04:12.080606898Z" level=info msg="container event discarded" container=2e7265b84aa37590851169812c5ad9542f9f55876190c4bae4dd0d610bff6dea type=CONTAINER_STOPPED_EVENT
	Dec 09 02:04:12 addons-520986 containerd[822]: time="2025-12-09T02:04:12.080703406Z" level=info msg="container event discarded" container=41522d71fff1759b9e18f13d7727d7d971af5b4377ee7e9e358ea3421b8e60d6 type=CONTAINER_STOPPED_EVENT
	Dec 09 02:04:12 addons-520986 containerd[822]: time="2025-12-09T02:04:12.237639711Z" level=info msg="container event discarded" container=1f30f47f9902ce884e4554b111ab2e3305baaa88760959e2500c03a57244621d type=CONTAINER_STOPPED_EVENT
	Dec 09 02:04:12 addons-520986 containerd[822]: time="2025-12-09T02:04:12.237680927Z" level=info msg="container event discarded" container=9d6277bdcab675adc40d925d9933916abe280d459fd67033f841014e806fbd38 type=CONTAINER_STOPPED_EVENT
	Dec 09 02:04:12 addons-520986 containerd[822]: time="2025-12-09T02:04:12.489199634Z" level=info msg="container event discarded" container=ea4bbe7cdffb92f0c518f3506abdab64d336924630f8c26676805c2cdf7f6a00 type=CONTAINER_CREATED_EVENT
	Dec 09 02:04:12 addons-520986 containerd[822]: time="2025-12-09T02:04:12.489248508Z" level=info msg="container event discarded" container=ea4bbe7cdffb92f0c518f3506abdab64d336924630f8c26676805c2cdf7f6a00 type=CONTAINER_STARTED_EVENT
	Dec 09 02:04:12 addons-520986 containerd[822]: time="2025-12-09T02:04:12.988870636Z" level=info msg="container event discarded" container=41522d71fff1759b9e18f13d7727d7d971af5b4377ee7e9e358ea3421b8e60d6 type=CONTAINER_DELETED_EVENT
	Dec 09 02:04:13 addons-520986 containerd[822]: time="2025-12-09T02:04:13.011399305Z" level=info msg="container event discarded" container=2e7265b84aa37590851169812c5ad9542f9f55876190c4bae4dd0d610bff6dea type=CONTAINER_DELETED_EVENT
	Dec 09 02:04:14 addons-520986 containerd[822]: time="2025-12-09T02:04:14.635140970Z" level=info msg="container event discarded" container=b1ad20e63555188b95fba57b3713a77cad5b110a358b306cd2a840294e9048f4 type=CONTAINER_CREATED_EVENT
	Dec 09 02:04:14 addons-520986 containerd[822]: time="2025-12-09T02:04:14.789575327Z" level=info msg="container event discarded" container=b1ad20e63555188b95fba57b3713a77cad5b110a358b306cd2a840294e9048f4 type=CONTAINER_STARTED_EVENT
	Dec 09 02:04:16 addons-520986 containerd[822]: time="2025-12-09T02:04:16.035858359Z" level=info msg="container event discarded" container=1be3d75e31bd14f4f16aeb37cde885b4468e193f452eab254b4da24b0c78ae62 type=CONTAINER_CREATED_EVENT
	Dec 09 02:04:16 addons-520986 containerd[822]: time="2025-12-09T02:04:16.035969205Z" level=info msg="container event discarded" container=1be3d75e31bd14f4f16aeb37cde885b4468e193f452eab254b4da24b0c78ae62 type=CONTAINER_STARTED_EVENT
	
	
	==> coredns [23174f258c8545002487a49e485ba48589d5696413c8722dade1feffb060a643] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] Reloading
	[INFO] 10.244.0.27:35550 - 31287 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000431095s
	[INFO] 10.244.0.27:45336 - 28415 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000166577s
	[INFO] 10.244.0.27:49874 - 32896 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149568s
	[INFO] 10.244.0.27:40524 - 42867 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000210365s
	[INFO] 10.244.0.27:42040 - 44177 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009067s
	[INFO] 10.244.0.27:60282 - 4652 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000310362s
	[INFO] 10.244.0.27:40634 - 56275 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003974228s
	[INFO] 10.244.0.27:53329 - 39965 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002769225s
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 10.244.0.31:56459 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000519537s
	[INFO] 10.244.0.31:56979 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00016087s
	
	
	==> describe nodes <==
	Name:               addons-520986
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-520986
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=addons-520986
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T01_56_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-520986
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 01:56:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-520986
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:04:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:03:13 +0000   Tue, 09 Dec 2025 01:56:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:03:13 +0000   Tue, 09 Dec 2025 01:56:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:03:13 +0000   Tue, 09 Dec 2025 01:56:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:03:13 +0000   Tue, 09 Dec 2025 01:56:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.56
	  Hostname:    addons-520986
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1934cbe821945129b0272a0810d6e14
	  System UUID:                c1934cbe-8219-4512-9b02-72a0810d6e14
	  Boot ID:                    5ba98362-4ea2-4d36-99c2-b5350ef5a136
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.4
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  default                     hello-world-app-5d498dc89-w92fp                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 amd-gpu-device-plugin-465wk                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 coredns-66bc5c9577-j5w2c                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m46s
	  kube-system                 etcd-addons-520986                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m53s
	  kube-system                 kube-apiserver-addons-520986                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 kube-controller-manager-addons-520986                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 kube-proxy-55jwk                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 kube-scheduler-addons-520986                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m40s
	  local-path-storage          helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  local-path-storage          local-path-provisioner-648f6765c9-hrjxd                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m44s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  7m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m58s (x8 over 7m58s)  kubelet          Node addons-520986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m58s (x8 over 7m58s)  kubelet          Node addons-520986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m58s (x7 over 7m58s)  kubelet          Node addons-520986 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m52s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m52s                  kubelet          Node addons-520986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m52s                  kubelet          Node addons-520986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m52s                  kubelet          Node addons-520986 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m51s                  kubelet          Node addons-520986 status is now: NodeReady
	  Normal  RegisteredNode           7m47s                  node-controller  Node addons-520986 event: Registered Node addons-520986 in Controller
	  Normal  CIDRAssignmentFailed     7m47s                  cidrAllocator    Node addons-520986 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +5.460097] kauditd_printk_skb: 107 callbacks suppressed
	[  +1.561461] kauditd_printk_skb: 106 callbacks suppressed
	[  +3.157294] kauditd_printk_skb: 76 callbacks suppressed
	[  +5.033114] kauditd_printk_skb: 71 callbacks suppressed
	[  +3.434526] kauditd_printk_skb: 81 callbacks suppressed
	[  +0.000049] kauditd_printk_skb: 20 callbacks suppressed
	[  +4.821963] kauditd_printk_skb: 86 callbacks suppressed
	[Dec 9 01:58] kauditd_printk_skb: 89 callbacks suppressed
	[  +0.000027] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.897955] kauditd_printk_skb: 26 callbacks suppressed
	[  +8.503662] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000051] kauditd_printk_skb: 68 callbacks suppressed
	[ +11.573145] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.928919] kauditd_printk_skb: 22 callbacks suppressed
	[Dec 9 01:59] kauditd_printk_skb: 64 callbacks suppressed
	[  +0.000070] kauditd_printk_skb: 31 callbacks suppressed
	[  +2.318746] kauditd_printk_skb: 213 callbacks suppressed
	[  +0.769251] kauditd_printk_skb: 118 callbacks suppressed
	[  +3.703597] kauditd_printk_skb: 48 callbacks suppressed
	[  +3.154418] kauditd_printk_skb: 128 callbacks suppressed
	[  +1.399107] kauditd_printk_skb: 42 callbacks suppressed
	[Dec 9 02:01] kauditd_printk_skb: 107 callbacks suppressed
	[  +0.000075] kauditd_printk_skb: 9 callbacks suppressed
	[Dec 9 02:03] kauditd_printk_skb: 26 callbacks suppressed
	[Dec 9 02:04] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [0548de094b66ecc7dc2fb8fd3cf315649e76d464976ed0add9b985fbfa64ae2d] <==
	{"level":"info","ts":"2025-12-09T01:57:25.543151Z","caller":"traceutil/trace.go:172","msg":"trace[1565615534] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1228; }","duration":"251.815281ms","start":"2025-12-09T01:57:25.291238Z","end":"2025-12-09T01:57:25.543053Z","steps":["trace[1565615534] 'agreement among raft nodes before linearized reading'  (duration: 251.485261ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:57:25.543565Z","caller":"traceutil/trace.go:172","msg":"trace[1358099045] transaction","detail":"{read_only:false; response_revision:1228; number_of_response:1; }","duration":"314.343969ms","start":"2025-12-09T01:57:25.229202Z","end":"2025-12-09T01:57:25.543546Z","steps":["trace[1358099045] 'process raft request'  (duration: 29.245942ms)","trace[1358099045] 'compare'  (duration: 282.580623ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-09T01:57:25.543650Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T01:57:25.229122Z","time spent":"314.473992ms","remote":"127.0.0.1:46654","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2159,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/snapshot-controller-7d9fbc56b8\" mod_revision:1225 > success:<request_put:<key:\"/registry/replicasets/kube-system/snapshot-controller-7d9fbc56b8\" value_size:2087 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/snapshot-controller-7d9fbc56b8\" > >"}
	{"level":"warn","ts":"2025-12-09T01:57:25.546089Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"253.730919ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T01:57:25.546491Z","caller":"traceutil/trace.go:172","msg":"trace[1451489977] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1228; }","duration":"254.133354ms","start":"2025-12-09T01:57:25.292348Z","end":"2025-12-09T01:57:25.546481Z","steps":["trace[1451489977] 'agreement among raft nodes before linearized reading'  (duration: 251.636899ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T01:57:34.422734Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.367248ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T01:57:34.422783Z","caller":"traceutil/trace.go:172","msg":"trace[558088992] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:1272; }","duration":"120.435042ms","start":"2025-12-09T01:57:34.302339Z","end":"2025-12-09T01:57:34.422774Z","steps":["trace[558088992] 'range keys from in-memory index tree'  (duration: 120.310553ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T01:57:34.423138Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"246.72969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T01:57:34.423163Z","caller":"traceutil/trace.go:172","msg":"trace[1716658586] range","detail":"{range_begin:/registry/controllerrevisions; range_end:; response_count:0; response_revision:1272; }","duration":"246.759885ms","start":"2025-12-09T01:57:34.176396Z","end":"2025-12-09T01:57:34.423156Z","steps":["trace[1716658586] 'range keys from in-memory index tree'  (duration: 246.660383ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T01:57:34.423369Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"213.841991ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-8mws5\" limit:1 ","response":"range_response_count:1 size:3689"}
	{"level":"info","ts":"2025-12-09T01:57:34.423387Z","caller":"traceutil/trace.go:172","msg":"trace[1261994678] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-patch-8mws5; range_end:; response_count:1; response_revision:1272; }","duration":"213.862435ms","start":"2025-12-09T01:57:34.209519Z","end":"2025-12-09T01:57:34.423382Z","steps":["trace[1261994678] 'range keys from in-memory index tree'  (duration: 213.733914ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T01:57:34.423772Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.187176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T01:57:34.423796Z","caller":"traceutil/trace.go:172","msg":"trace[1094916789] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1272; }","duration":"134.213608ms","start":"2025-12-09T01:57:34.289576Z","end":"2025-12-09T01:57:34.423790Z","steps":["trace[1094916789] 'range keys from in-memory index tree'  (duration: 133.964979ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T01:57:34.424090Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.470418ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T01:57:34.424109Z","caller":"traceutil/trace.go:172","msg":"trace[798619159] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1272; }","duration":"134.491087ms","start":"2025-12-09T01:57:34.289613Z","end":"2025-12-09T01:57:34.424104Z","steps":["trace[798619159] 'range keys from in-memory index tree'  (duration: 134.426524ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:57:49.037514Z","caller":"traceutil/trace.go:172","msg":"trace[1705176302] linearizableReadLoop","detail":"{readStateIndex:1366; appliedIndex:1366; }","duration":"248.741277ms","start":"2025-12-09T01:57:48.788730Z","end":"2025-12-09T01:57:49.037471Z","steps":["trace[1705176302] 'read index received'  (duration: 248.733409ms)","trace[1705176302] 'applied index is now lower than readState.Index'  (duration: 7.116µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-09T01:57:49.037846Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"249.064888ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T01:57:49.037991Z","caller":"traceutil/trace.go:172","msg":"trace[1815560509] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1336; }","duration":"249.253975ms","start":"2025-12-09T01:57:48.788726Z","end":"2025-12-09T01:57:49.037980Z","steps":["trace[1815560509] 'agreement among raft nodes before linearized reading'  (duration: 248.991154ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:57:49.040361Z","caller":"traceutil/trace.go:172","msg":"trace[517498940] transaction","detail":"{read_only:false; response_revision:1337; number_of_response:1; }","duration":"279.546135ms","start":"2025-12-09T01:57:48.760800Z","end":"2025-12-09T01:57:49.040346Z","steps":["trace[517498940] 'process raft request'  (duration: 277.441241ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:57:55.943518Z","caller":"traceutil/trace.go:172","msg":"trace[682111699] transaction","detail":"{read_only:false; response_revision:1402; number_of_response:1; }","duration":"115.266343ms","start":"2025-12-09T01:57:55.828236Z","end":"2025-12-09T01:57:55.943503Z","steps":["trace[682111699] 'process raft request'  (duration: 115.175095ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:58:28.666620Z","caller":"traceutil/trace.go:172","msg":"trace[2090950996] linearizableReadLoop","detail":"{readStateIndex:1557; appliedIndex:1557; }","duration":"110.477036ms","start":"2025-12-09T01:58:28.556119Z","end":"2025-12-09T01:58:28.666596Z","steps":["trace[2090950996] 'read index received'  (duration: 110.470443ms)","trace[2090950996] 'applied index is now lower than readState.Index'  (duration: 5.583µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-09T01:58:28.666795Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.600584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T01:58:28.666825Z","caller":"traceutil/trace.go:172","msg":"trace[648922672] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1517; }","duration":"110.703044ms","start":"2025-12-09T01:58:28.556115Z","end":"2025-12-09T01:58:28.666818Z","steps":["trace[648922672] 'agreement among raft nodes before linearized reading'  (duration: 110.571046ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:58:28.670363Z","caller":"traceutil/trace.go:172","msg":"trace[2043086108] transaction","detail":"{read_only:false; response_revision:1518; number_of_response:1; }","duration":"157.908886ms","start":"2025-12-09T01:58:28.512443Z","end":"2025-12-09T01:58:28.670352Z","steps":["trace[2043086108] 'process raft request'  (duration: 157.021003ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:59:02.353373Z","caller":"traceutil/trace.go:172","msg":"trace[1399644659] transaction","detail":"{read_only:false; response_revision:1797; number_of_response:1; }","duration":"114.080503ms","start":"2025-12-09T01:59:02.239278Z","end":"2025-12-09T01:59:02.353358Z","steps":["trace[1399644659] 'process raft request'  (duration: 113.989135ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:04:16 up 8 min,  0 users,  load average: 0.40, 0.84, 0.63
	Linux addons-520986 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [9c7b09226f81d86031e8765a14cb5e7f3340dc7dcf392d7c18885cbf0c616449] <==
	W1209 01:58:32.259767       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1209 01:58:48.021039       1 conn.go:339] Error on socket receive: read tcp 192.168.39.56:8443->192.168.39.1:60898: use of closed network connection
	E1209 01:58:48.211398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.56:8443->192.168.39.1:60934: use of closed network connection
	I1209 01:58:57.968330       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.83.165"}
	I1209 01:59:08.054171       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1209 01:59:18.811791       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1209 01:59:19.025175       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.207.205"}
	I1209 01:59:21.525277       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1209 01:59:26.558074       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.57.27"}
	E1209 01:59:29.233732       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E1209 01:59:30.338983       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E1209 01:59:30.346217       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	I1209 01:59:37.973182       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 01:59:37.973299       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 01:59:38.011099       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 01:59:38.011531       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 01:59:38.013992       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 01:59:38.015064       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 01:59:38.041722       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 01:59:38.041775       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 01:59:38.076616       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 01:59:38.076668       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1209 01:59:39.014776       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1209 01:59:39.076646       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1209 01:59:39.096034       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [a61c6f513a666739d8ddd4935782e2124b1330e63d6b746c6c732fb533713dfd] <==
	E1209 02:03:27.246310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 02:03:30.342054       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 02:03:30.344221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 02:03:38.014546       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 02:03:38.015725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 02:03:43.493982       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 02:03:43.495665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 02:03:47.703991       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 02:03:47.705329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 02:03:49.884858       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 02:03:49.886167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 02:03:55.131440       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 02:03:55.132837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 02:03:58.597121       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 02:03:58.598851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 02:03:58.675841       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 02:03:58.678537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 02:03:59.210135       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 02:03:59.211821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 02:04:00.477364       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 02:04:00.478748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 02:04:03.743646       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 02:04:03.745181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 02:04:11.766144       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 02:04:11.767532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [13a4fa31c190baf19fe2eb0e8e3df418a4708d5340d079b6d6b362a34e8642fc] <==
	I1209 01:56:31.331467       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 01:56:31.433451       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 01:56:31.433489       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.56"]
	E1209 01:56:31.433553       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 01:56:31.540193       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1209 01:56:31.540360       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 01:56:31.540437       1 server_linux.go:132] "Using iptables Proxier"
	I1209 01:56:31.574581       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 01:56:31.575250       1 server.go:527] "Version info" version="v1.34.2"
	I1209 01:56:31.575278       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 01:56:31.580608       1 config.go:200] "Starting service config controller"
	I1209 01:56:31.580639       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 01:56:31.580662       1 config.go:106] "Starting endpoint slice config controller"
	I1209 01:56:31.580666       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 01:56:31.580675       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 01:56:31.580678       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 01:56:31.581713       1 config.go:309] "Starting node config controller"
	I1209 01:56:31.587626       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 01:56:31.600003       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 01:56:31.680768       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 01:56:31.680793       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 01:56:31.680840       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [29e4423f1374f3418ef328d00e60cea3e5564243eb93157de099662229393316] <==
	E1209 01:56:22.198590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1209 01:56:22.199263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1209 01:56:22.199155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1209 01:56:22.199479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1209 01:56:22.199562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1209 01:56:22.199706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1209 01:56:22.199852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1209 01:56:22.199887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1209 01:56:22.199963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 01:56:22.199967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 01:56:22.200177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1209 01:56:22.199976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1209 01:56:23.011765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1209 01:56:23.073602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1209 01:56:23.075750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1209 01:56:23.100842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1209 01:56:23.136471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 01:56:23.197696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1209 01:56:23.223284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1209 01:56:23.376492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1209 01:56:23.380609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1209 01:56:23.385845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1209 01:56:23.455267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1209 01:56:23.621332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1209 01:56:25.681986       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 09 02:03:32 addons-520986 kubelet[1518]: I1209 02:03:32.016635    1518 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/5fa217e3-a5b5-4e25-a29f-62df9665dd23-data\") on node \"addons-520986\" DevicePath \"\""
	Dec 09 02:03:32 addons-520986 kubelet[1518]: I1209 02:03:32.016693    1518 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8g87c\" (UniqueName: \"kubernetes.io/projected/5fa217e3-a5b5-4e25-a29f-62df9665dd23-kube-api-access-8g87c\") on node \"addons-520986\" DevicePath \"\""
	Dec 09 02:03:32 addons-520986 kubelet[1518]: I1209 02:03:32.016707    1518 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/5fa217e3-a5b5-4e25-a29f-62df9665dd23-script\") on node \"addons-520986\" DevicePath \"\""
	Dec 09 02:03:32 addons-520986 kubelet[1518]: I1209 02:03:32.734734    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-465wk" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 02:03:32 addons-520986 kubelet[1518]: I1209 02:03:32.739283    1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fa217e3-a5b5-4e25-a29f-62df9665dd23" path="/var/lib/kubelet/pods/5fa217e3-a5b5-4e25-a29f-62df9665dd23/volumes"
	Dec 09 02:03:37 addons-520986 kubelet[1518]: E1209 02:03:37.736715    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:1.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-w92fp" podUID="3696685f-64cf-4c4c-b75b-aa7a4392f328"
	Dec 09 02:03:49 addons-520986 kubelet[1518]: E1209 02:03:49.736262    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:1.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-w92fp" podUID="3696685f-64cf-4c4c-b75b-aa7a4392f328"
	Dec 09 02:04:01 addons-520986 kubelet[1518]: I1209 02:04:01.830531    1518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/8f8c48dc-9016-410e-aeaa-f33b2f768570-data\") pod \"helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12\" (UID: \"8f8c48dc-9016-410e-aeaa-f33b2f768570\") " pod="local-path-storage/helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12"
	Dec 09 02:04:01 addons-520986 kubelet[1518]: I1209 02:04:01.830591    1518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/8f8c48dc-9016-410e-aeaa-f33b2f768570-script\") pod \"helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12\" (UID: \"8f8c48dc-9016-410e-aeaa-f33b2f768570\") " pod="local-path-storage/helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12"
	Dec 09 02:04:01 addons-520986 kubelet[1518]: I1209 02:04:01.830614    1518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxxqr\" (UniqueName: \"kubernetes.io/projected/8f8c48dc-9016-410e-aeaa-f33b2f768570-kube-api-access-lxxqr\") pod \"helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12\" (UID: \"8f8c48dc-9016-410e-aeaa-f33b2f768570\") " pod="local-path-storage/helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12"
	Dec 09 02:04:03 addons-520986 kubelet[1518]: E1209 02:04:03.218563    1518 log.go:32] "PullImage from image service failed" err=<
	Dec 09 02:04:03 addons-520986 kubelet[1518]:         rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests
	Dec 09 02:04:03 addons-520986 kubelet[1518]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 09 02:04:03 addons-520986 kubelet[1518]:  > image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 09 02:04:03 addons-520986 kubelet[1518]: E1209 02:04:03.218655    1518 kuberuntime_image.go:43] "Failed to pull image" err=<
	Dec 09 02:04:03 addons-520986 kubelet[1518]:         failed to pull and unpack image "docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests
	Dec 09 02:04:03 addons-520986 kubelet[1518]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 09 02:04:03 addons-520986 kubelet[1518]:  > image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 09 02:04:03 addons-520986 kubelet[1518]: E1209 02:04:03.218803    1518 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Dec 09 02:04:03 addons-520986 kubelet[1518]:         container helper-pod start failed in pod helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12_local-path-storage(8f8c48dc-9016-410e-aeaa-f33b2f768570): ErrImagePull: failed to pull and unpack image "docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests
	Dec 09 02:04:03 addons-520986 kubelet[1518]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 09 02:04:03 addons-520986 kubelet[1518]:  > logger="UnhandledError"
	Dec 09 02:04:03 addons-520986 kubelet[1518]: E1209 02:04:03.218942    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12" podUID="8f8c48dc-9016-410e-aeaa-f33b2f768570"
	Dec 09 02:04:03 addons-520986 kubelet[1518]: E1209 02:04:03.735804    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:1.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-w92fp" podUID="3696685f-64cf-4c4c-b75b-aa7a4392f328"
	Dec 09 02:04:04 addons-520986 kubelet[1518]: E1209 02:04:04.201183    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12" podUID="8f8c48dc-9016-410e-aeaa-f33b2f768570"
	
	
	==> storage-provisioner [d553c7d0bef844770979140bf5e5ea6d82b220e7fee4521ae8638b08b55ed34b] <==
	W1209 02:03:51.510045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:03:53.515136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:03:53.522757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:03:55.526180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:03:55.532245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:03:57.536227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:03:57.544175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:03:59.550778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:03:59.557669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:04:01.561337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:04:01.570054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:04:03.573544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:04:03.578695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:04:05.582746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:04:05.590801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:04:07.594476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:04:07.600678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:04:09.605422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:04:09.611131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:04:11.614640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:04:11.620463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:04:13.625982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:04:13.633430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:04:15.637218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:04:15.643106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
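Note: the kubelet errors in the log above are not a cluster fault; they are Docker Hub's unauthenticated pull rate limit (HTTP 429 / toomanyrequests) hitting both kicbase/echo-server and the local-path helper's busybox image. A minimal Go sketch for checking how much anonymous quota the CI host has left, assuming Docker Hub's documented ratelimitpreview/test probe repository and the ratelimit-limit / ratelimit-remaining response headers:

// ratelimitcheck probes Docker Hub's anonymous pull quota.
// Sketch only: it assumes the documented ratelimitpreview/test repository and
// rate-limit response headers; a HEAD request should not consume quota.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// 1. Fetch an anonymous pull token for the probe repository.
	tokenURL := "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull"
	resp, err := http.Get(tokenURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest and read the rate-limit headers.
	req, err := http.NewRequest(http.MethodHead,
		"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)

	manifest, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer manifest.Body.Close()

	// A 429 here matches the kubelet's "toomanyrequests" errors above.
	fmt.Println("status:   ", manifest.Status)
	fmt.Println("limit:    ", manifest.Header.Get("ratelimit-limit"))
	fmt.Println("remaining:", manifest.Header.Get("ratelimit-remaining"))
}

The usual mitigations are authenticated pulls (Docker Hub credentials on the runner) or a registry mirror, so the images are not fetched anonymously from registry-1.docker.io on every run.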
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-520986 -n addons-520986
helpers_test.go:269: (dbg) Run:  kubectl --context addons-520986 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-w92fp test-local-path helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-520986 describe pod hello-world-app-5d498dc89-w92fp test-local-path helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-520986 describe pod hello-world-app-5d498dc89-w92fp test-local-path helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12: exit status 1 (77.518512ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-w92fp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-520986/192.168.39.56
	Start Time:       Tue, 09 Dec 2025 01:59:26 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.35
	IPs:
	  IP:           10.244.0.35
	Controlled By:  ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fjbmf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fjbmf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m51s                 default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-w92fp to addons-520986
	  Normal   Pulling    118s (x5 over 4m50s)  kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     117s (x5 over 4m50s)  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": failed to pull and unpack image "docker.io/kicbase/echo-server:1.0": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   117s (x5 over 4m50s)  kubelet  Error: ErrImagePull
	  Warning  Failed   55s (x15 over 4m49s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  1s (x19 over 4m49s)   kubelet  Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8j7c7 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-8j7c7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-520986 describe pod hello-world-app-5d498dc89-w92fp test-local-path helper-pod-create-pvc-8715d544-6067-4ebe-abfc-382357e7ff12: exit status 1
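For reference, the test-local-path pod described above is just a busybox:stable container writing one file into a PersistentVolumeClaim-backed mount. A client-go sketch that recreates it outside the test harness follows; the pod name, image, command, mount path, and claim name are taken from the describe output, while the kubeconfig loading and the Never restart policy are assumptions:

// createtestlocalpath recreates the test-local-path pod from the describe
// output above. Sketch only: kubeconfig loading and the Never restart policy
// are assumptions; everything else comes from the log.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "test-local-path",
			Labels: map[string]string{"run": "test-local-path"},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // assumption; not visible in the describe output
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:stable",
				Command: []string{"sh", "-c", "echo 'local-path-provisioner' > /test/file1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "data",
					MountPath: "/test",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "data",
				VolumeSource: corev1.VolumeSource{
					// Bind the claim provisioned by the storage-provisioner-rancher addon.
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: "test-pvc",
					},
				},
			}},
		},
	}

	if _, err := client.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}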
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-520986 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-520986 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.805409369s)
--- FAIL: TestAddons/parallel/LocalPath (344.92s)
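Because the local-path helper pod could not pull docker.io/busybox:stable, the claim it was provisioning presumably never became Bound, which is what this test was waiting on. A small client-go sketch that polls a claim's phase until it is Bound; the kubeconfig loading, the polling interval, and the 5-minute deadline are arbitrary assumptions, while the claim name and namespace come from the log above:

// waitpvc polls a PersistentVolumeClaim until it is Bound or the deadline
// expires. Sketch only: kubeconfig loading, interval, and deadline are
// assumptions; the claim name and namespace come from the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	// Check every 5s; stop as soon as the claim reports phase Bound.
	err = wait.PollUntilContextCancel(ctx, 5*time.Second, true, func(ctx context.Context) (bool, error) {
		pvc, err := client.CoreV1().PersistentVolumeClaims("default").Get(ctx, "test-pvc", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Println("test-pvc phase:", pvc.Status.Phase)
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
	if err != nil {
		panic(err) // a deadline error here means the claim never bound
	}
}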

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (302.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-230202 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-230202 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-230202 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-230202 --alsologtostderr -v=1] stderr:
I1209 02:19:41.643477  802981 out.go:360] Setting OutFile to fd 1 ...
I1209 02:19:41.643796  802981 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:19:41.643812  802981 out.go:374] Setting ErrFile to fd 2...
I1209 02:19:41.643819  802981 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:19:41.644147  802981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
I1209 02:19:41.644499  802981 mustload.go:66] Loading cluster: functional-230202
I1209 02:19:41.644888  802981 config.go:182] Loaded profile config "functional-230202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 02:19:41.646887  802981 host.go:66] Checking if "functional-230202" exists ...
I1209 02:19:41.647119  802981 api_server.go:166] Checking apiserver status ...
I1209 02:19:41.647192  802981 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1209 02:19:41.649521  802981 main.go:143] libmachine: domain functional-230202 has defined MAC address 52:54:00:44:54:51 in network mk-functional-230202
I1209 02:19:41.649932  802981 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:44:54:51", ip: ""} in network mk-functional-230202: {Iface:virbr1 ExpiryTime:2025-12-09 03:16:58 +0000 UTC Type:0 Mac:52:54:00:44:54:51 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:functional-230202 Clientid:01:52:54:00:44:54:51}
I1209 02:19:41.649969  802981 main.go:143] libmachine: domain functional-230202 has defined IP address 192.168.39.49 and MAC address 52:54:00:44:54:51 in network mk-functional-230202
I1209 02:19:41.650150  802981 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/functional-230202/id_rsa Username:docker}
I1209 02:19:41.755856  802981 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5505/cgroup
W1209 02:19:41.771055  802981 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5505/cgroup: Process exited with status 1
stdout:

stderr:
I1209 02:19:41.771121  802981 ssh_runner.go:195] Run: ls
I1209 02:19:41.779252  802981 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8441/healthz ...
I1209 02:19:41.785609  802981 api_server.go:279] https://192.168.39.49:8441/healthz returned 200:
ok
W1209 02:19:41.785675  802981 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1209 02:19:41.785892  802981 config.go:182] Loaded profile config "functional-230202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 02:19:41.785914  802981 addons.go:70] Setting dashboard=true in profile "functional-230202"
I1209 02:19:41.785931  802981 addons.go:239] Setting addon dashboard=true in "functional-230202"
I1209 02:19:41.785962  802981 host.go:66] Checking if "functional-230202" exists ...
I1209 02:19:41.788830  802981 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1209 02:19:41.790030  802981 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1209 02:19:41.790935  802981 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1209 02:19:41.790954  802981 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1209 02:19:41.794184  802981 main.go:143] libmachine: domain functional-230202 has defined MAC address 52:54:00:44:54:51 in network mk-functional-230202
I1209 02:19:41.794640  802981 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:44:54:51", ip: ""} in network mk-functional-230202: {Iface:virbr1 ExpiryTime:2025-12-09 03:16:58 +0000 UTC Type:0 Mac:52:54:00:44:54:51 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:functional-230202 Clientid:01:52:54:00:44:54:51}
I1209 02:19:41.794668  802981 main.go:143] libmachine: domain functional-230202 has defined IP address 192.168.39.49 and MAC address 52:54:00:44:54:51 in network mk-functional-230202
I1209 02:19:41.794905  802981 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/functional-230202/id_rsa Username:docker}
I1209 02:19:41.932964  802981 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1209 02:19:41.932993  802981 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1209 02:19:41.955989  802981 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1209 02:19:41.956040  802981 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1209 02:19:41.981569  802981 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1209 02:19:41.981597  802981 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1209 02:19:42.007786  802981 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1209 02:19:42.007817  802981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1209 02:19:42.046768  802981 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1209 02:19:42.046800  802981 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1209 02:19:42.071037  802981 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1209 02:19:42.071072  802981 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1209 02:19:42.091987  802981 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1209 02:19:42.092021  802981 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1209 02:19:42.113715  802981 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1209 02:19:42.113741  802981 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1209 02:19:42.140604  802981 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1209 02:19:42.140631  802981 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1209 02:19:42.188547  802981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1209 02:19:43.201092  802981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.012404326s)
I1209 02:19:43.202840  802981 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-230202 addons enable metrics-server

I1209 02:19:43.204251  802981 addons.go:202] Writing out "functional-230202" config to set dashboard=true...
W1209 02:19:43.205059  802981 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1209 02:19:43.205997  802981 kapi.go:59] client config for functional-230202: &rest.Config{Host:"https://192.168.39.49:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.key", CAFile:"/home/jenkins/minikube-integration/22081-785489/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28162e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1209 02:19:43.206502  802981 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1209 02:19:43.206518  802981 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1209 02:19:43.206523  802981 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1209 02:19:43.206538  802981 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1209 02:19:43.206544  802981 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1209 02:19:43.223721  802981 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  b70de8a7-2eac-42b9-904d-ee9fea2b8e98 757 0 2025-12-09 02:19:43 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-09 02:19:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.98.229.3,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.98.229.3],IPFamilies:[IPv4],AllocateLoadBalancerNod
ePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1209 02:19:43.223882  802981 out.go:285] * Launching proxy ...
* Launching proxy ...
I1209 02:19:43.223959  802981 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-230202 proxy --port 36195]
I1209 02:19:43.224351  802981 dashboard.go:159] Waiting for kubectl to output host:port ...
I1209 02:19:43.270886  802981 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1209 02:19:43.270945  802981 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1209 02:19:43.287346  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b62b4b41-7104-451b-a3d0-049aaa0828a5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc001510d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001ce500 TLS:<nil>}
I1209 02:19:43.287452  802981 retry.go:31] will retry after 75.156µs: Temporary Error: unexpected response code: 503
I1209 02:19:43.295026  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9ba23566-87bb-4306-906d-9ccd1b41b80a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc001510e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003003c0 TLS:<nil>}
I1209 02:19:43.295087  802981 retry.go:31] will retry after 146.233µs: Temporary Error: unexpected response code: 503
I1209 02:19:43.303996  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cf519686-7fca-4758-88ec-77a186477294] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc001510ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000300500 TLS:<nil>}
I1209 02:19:43.304172  802981 retry.go:31] will retry after 159.12µs: Temporary Error: unexpected response code: 503
I1209 02:19:43.311882  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[00715e6a-035f-44e1-8c3f-c159860fde9c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc001510fc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000300780 TLS:<nil>}
I1209 02:19:43.311945  802981 retry.go:31] will retry after 196.615µs: Temporary Error: unexpected response code: 503
I1209 02:19:43.318399  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3efb871a-8b67-44c1-b4df-c8bf65c2cfac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc00168ef80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003008c0 TLS:<nil>}
I1209 02:19:43.318471  802981 retry.go:31] will retry after 712.607µs: Temporary Error: unexpected response code: 503
I1209 02:19:43.338484  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6777f9f5-ef3c-4fd9-8a00-051ebe3666d7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc0007ad080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001ce640 TLS:<nil>}
I1209 02:19:43.338583  802981 retry.go:31] will retry after 545.598µs: Temporary Error: unexpected response code: 503
I1209 02:19:43.343121  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[43fd567d-22b4-4855-9a64-8eee32bf65dd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc00168f080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045a640 TLS:<nil>}
I1209 02:19:43.343235  802981 retry.go:31] will retry after 1.222003ms: Temporary Error: unexpected response code: 503
I1209 02:19:43.356785  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[88376d4e-6708-4989-83a4-5860deeb64ce] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc0007ad180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001ce8c0 TLS:<nil>}
I1209 02:19:43.356861  802981 retry.go:31] will retry after 1.243257ms: Temporary Error: unexpected response code: 503
I1209 02:19:43.366737  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ac94f792-6edb-4d24-acd6-7243c2090fdc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc0015110c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045a780 TLS:<nil>}
I1209 02:19:43.366811  802981 retry.go:31] will retry after 3.803241ms: Temporary Error: unexpected response code: 503
I1209 02:19:43.373514  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ae94d6ab-6bae-4e91-91d5-3db113cc8693] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc0007ad280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000300a00 TLS:<nil>}
I1209 02:19:43.373570  802981 retry.go:31] will retry after 2.092099ms: Temporary Error: unexpected response code: 503
I1209 02:19:43.380397  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[410744d1-8f4c-4c16-8216-5df13f42a7b5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc001511200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045aa00 TLS:<nil>}
I1209 02:19:43.380478  802981 retry.go:31] will retry after 3.363669ms: Temporary Error: unexpected response code: 503
I1209 02:19:43.390829  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c04e2b6c-6418-4839-b8e5-44334bb845fe] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc00168f180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000300b40 TLS:<nil>}
I1209 02:19:43.390943  802981 retry.go:31] will retry after 10.992029ms: Temporary Error: unexpected response code: 503
I1209 02:19:43.406264  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[27438119-722e-4254-84ff-fabeac63e6fb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc0007ad340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cea00 TLS:<nil>}
I1209 02:19:43.406332  802981 retry.go:31] will retry after 10.100155ms: Temporary Error: unexpected response code: 503
I1209 02:19:43.422738  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[27ddc7ba-f78f-4266-b62a-ffc5e84c6649] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc001511300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045ab40 TLS:<nil>}
I1209 02:19:43.422808  802981 retry.go:31] will retry after 20.428188ms: Temporary Error: unexpected response code: 503
I1209 02:19:43.447410  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1159270a-b446-4c9a-b9da-0964ac17531e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc00168f280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000300c80 TLS:<nil>}
I1209 02:19:43.447479  802981 retry.go:31] will retry after 42.187839ms: Temporary Error: unexpected response code: 503
I1209 02:19:43.495259  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8e965b32-9474-4eb9-b023-06a81eb49d1a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc0007ad440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001ceb40 TLS:<nil>}
I1209 02:19:43.495323  802981 retry.go:31] will retry after 51.469719ms: Temporary Error: unexpected response code: 503
I1209 02:19:43.552451  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[050e8bbb-dc40-4fc5-9ebb-9a7287e353fc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc0007ad540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045ac80 TLS:<nil>}
I1209 02:19:43.552542  802981 retry.go:31] will retry after 41.215123ms: Temporary Error: unexpected response code: 503
I1209 02:19:43.600100  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6bc5489d-05db-4528-83a5-f5866bfa5265] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc00168f380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045adc0 TLS:<nil>}
I1209 02:19:43.600181  802981 retry.go:31] will retry after 90.623056ms: Temporary Error: unexpected response code: 503
I1209 02:19:43.695053  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e7803d3d-1a34-4884-9cba-f1f390bdaf73] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc001511400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cec80 TLS:<nil>}
I1209 02:19:43.695123  802981 retry.go:31] will retry after 213.085827ms: Temporary Error: unexpected response code: 503
I1209 02:19:43.911642  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[af813984-1d2e-4e7a-88e6-405cc92180df] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:43 GMT]] Body:0xc00168f480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000300dc0 TLS:<nil>}
I1209 02:19:43.911715  802981 retry.go:31] will retry after 144.201722ms: Temporary Error: unexpected response code: 503
I1209 02:19:44.059505  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[97ba4b49-f55a-4986-b5fb-d328b48433e7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:44 GMT]] Body:0xc001511480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cedc0 TLS:<nil>}
I1209 02:19:44.059599  802981 retry.go:31] will retry after 467.274809ms: Temporary Error: unexpected response code: 503
I1209 02:19:44.529806  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[31ecbe4c-7ee8-4eb1-95c5-a81a703af5c5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:44 GMT]] Body:0xc00168f540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000300f00 TLS:<nil>}
I1209 02:19:44.529885  802981 retry.go:31] will retry after 541.628086ms: Temporary Error: unexpected response code: 503
I1209 02:19:45.074931  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b5011e79-78e3-4c21-98b7-b3f6141e211c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:45 GMT]] Body:0xc0007ad740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cef00 TLS:<nil>}
I1209 02:19:45.074998  802981 retry.go:31] will retry after 947.710961ms: Temporary Error: unexpected response code: 503
I1209 02:19:46.026511  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9b200150-fd14-4d21-b6c0-695c302c01f4] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:46 GMT]] Body:0xc001511580 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045b040 TLS:<nil>}
I1209 02:19:46.026612  802981 retry.go:31] will retry after 885.826957ms: Temporary Error: unexpected response code: 503
I1209 02:19:46.915789  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6439aafc-96bf-4124-a9ca-1e0bc271e04d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:46 GMT]] Body:0xc001511640 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000301040 TLS:<nil>}
I1209 02:19:46.915871  802981 retry.go:31] will retry after 868.106829ms: Temporary Error: unexpected response code: 503
I1209 02:19:47.787878  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[63818c78-858e-4f84-aac1-7eea07230535] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:47 GMT]] Body:0xc0007ad840 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000301180 TLS:<nil>}
I1209 02:19:47.787994  802981 retry.go:31] will retry after 2.33425311s: Temporary Error: unexpected response code: 503
I1209 02:19:50.126818  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f508aa87-f278-4a60-bc6f-a3766e43259b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:50 GMT]] Body:0xc00168f680 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045b180 TLS:<nil>}
I1209 02:19:50.126906  802981 retry.go:31] will retry after 5.060037827s: Temporary Error: unexpected response code: 503
I1209 02:19:55.193418  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[21ec0910-1678-4a8a-84da-cdbab201ac18] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:19:55 GMT]] Body:0xc0007ad940 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045b2c0 TLS:<nil>}
I1209 02:19:55.193527  802981 retry.go:31] will retry after 5.717264349s: Temporary Error: unexpected response code: 503
I1209 02:20:00.920185  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d389ef74-ae16-4322-a7b7-d5ecc5488568] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:20:00 GMT]] Body:0xc00168f740 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045b400 TLS:<nil>}
I1209 02:20:00.920256  802981 retry.go:31] will retry after 6.474607711s: Temporary Error: unexpected response code: 503
I1209 02:20:07.398429  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4c03e8ae-a47c-4e83-b54b-4a1eec406f87] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:20:07 GMT]] Body:0xc0007ada80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045b540 TLS:<nil>}
I1209 02:20:07.398524  802981 retry.go:31] will retry after 8.10305816s: Temporary Error: unexpected response code: 503
I1209 02:20:15.507923  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7e3ed2f0-4d91-4c87-ae67-72b38788353a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:20:15 GMT]] Body:0xc00168f7c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045b680 TLS:<nil>}
I1209 02:20:15.507987  802981 retry.go:31] will retry after 13.603512045s: Temporary Error: unexpected response code: 503
I1209 02:20:29.116921  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[087390aa-8560-41ee-bb92-831fdd5b8202] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:20:29 GMT]] Body:0xc00168f840 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003012c0 TLS:<nil>}
I1209 02:20:29.117010  802981 retry.go:31] will retry after 40.302250348s: Temporary Error: unexpected response code: 503
I1209 02:21:09.424198  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6520c4ca-1b29-4aa6-ac20-4adb9e1e82cd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:21:09 GMT]] Body:0xc001511800 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045b7c0 TLS:<nil>}
I1209 02:21:09.424281  802981 retry.go:31] will retry after 58.729136766s: Temporary Error: unexpected response code: 503
I1209 02:22:08.157488  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8011b3ee-0710-4d7b-af16-4714f5c32a14] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:22:08 GMT]] Body:0xc00168e0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001ce3c0 TLS:<nil>}
I1209 02:22:08.157598  802981 retry.go:31] will retry after 1m10.158393863s: Temporary Error: unexpected response code: 503
I1209 02:23:18.321232  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[349a5b1a-786a-4ab1-9f38-c47e3cdea715] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:23:18 GMT]] Body:0xc00168e140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045a280 TLS:<nil>}
I1209 02:23:18.321334  802981 retry.go:31] will retry after 1m14.480436615s: Temporary Error: unexpected response code: 503
I1209 02:24:32.806346  802981 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[18ed1e45-c61a-4853-81e6-67251416b707] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:24:32 GMT]] Body:0xc0007ac040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cf040 TLS:<nil>}
I1209 02:24:32.806447  802981 retry.go:31] will retry after 1m0.04980814s: Temporary Error: unexpected response code: 503
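Every probe of the kubernetes-dashboard service proxy above returned 503. A 503 from the apiserver's service proxy usually means the Service has no ready endpoints yet, so a quicker way to diagnose this than retrying the proxy URL is to look at the Service's EndpointSlices (the discovery.k8s.io/v1 API). A client-go sketch, with kubeconfig loading assumed and the namespace and service name taken from the service dump above:

// dashboardendpoints checks whether the kubernetes-dashboard Service has any
// ready endpoints; the apiserver's service proxy returns 503 when it does not.
// Sketch only: kubeconfig loading is assumed; names come from the service dump above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// EndpointSlices carry a kubernetes.io/service-name label pointing at their Service.
	slices, err := client.DiscoveryV1().EndpointSlices("kubernetes-dashboard").List(
		context.Background(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kubernetes-dashboard"},
	)
	if err != nil {
		panic(err)
	}

	ready := 0
	for _, slice := range slices.Items {
		for _, ep := range slice.Endpoints {
			if ep.Conditions.Ready != nil && *ep.Conditions.Ready {
				ready++
			}
		}
	}
	fmt.Printf("ready kubernetes-dashboard endpoints: %d\n", ready)
}

If the count stays at zero, describing the pods in the kubernetes-dashboard namespace would show whether they are stuck pulling docker.io/kubernetesui/dashboard:v2.7.0, which would line up with the registry rate limiting seen in the other failure.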
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-230202 -n functional-230202
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-230202 logs -n 25: (1.396807066s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-230202 update-context --alsologtostderr -v=2                                                                                             │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ update-context │ functional-230202 update-context --alsologtostderr -v=2                                                                                             │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ image          │ functional-230202 image ls --format short --alsologtostderr                                                                                         │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ image          │ functional-230202 image ls --format yaml --alsologtostderr                                                                                          │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh            │ functional-230202 ssh pgrep buildkitd                                                                                                               │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ image          │ functional-230202 image build -t localhost/my-image:functional-230202 testdata/build --alsologtostderr                                              │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh            │ functional-230202 ssh stat /mount-9p/created-by-test                                                                                                │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh            │ functional-230202 ssh stat /mount-9p/created-by-pod                                                                                                 │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh            │ functional-230202 ssh sudo umount -f /mount-9p                                                                                                      │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ mount          │ -p functional-230202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4119700011/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ ssh            │ functional-230202 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ image          │ functional-230202 image ls                                                                                                                          │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ image          │ functional-230202 image ls --format json --alsologtostderr                                                                                          │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ image          │ functional-230202 image ls --format table --alsologtostderr                                                                                         │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh            │ functional-230202 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh            │ functional-230202 ssh -- ls -la /mount-9p                                                                                                           │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh            │ functional-230202 ssh sudo umount -f /mount-9p                                                                                                      │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ mount          │ -p functional-230202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4092420648/001:/mount2 --alsologtostderr -v=1                │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ ssh            │ functional-230202 ssh findmnt -T /mount1                                                                                                            │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ mount          │ -p functional-230202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4092420648/001:/mount3 --alsologtostderr -v=1                │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ mount          │ -p functional-230202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4092420648/001:/mount1 --alsologtostderr -v=1                │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ ssh            │ functional-230202 ssh findmnt -T /mount1                                                                                                            │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh            │ functional-230202 ssh findmnt -T /mount2                                                                                                            │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh            │ functional-230202 ssh findmnt -T /mount3                                                                                                            │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ mount          │ -p functional-230202 --kill=true                                                                                                                    │ functional-230202 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:19:41
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:19:41.514951  802954 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:19:41.515269  802954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:19:41.515280  802954 out.go:374] Setting ErrFile to fd 2...
	I1209 02:19:41.515287  802954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:19:41.515598  802954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
	I1209 02:19:41.516078  802954 out.go:368] Setting JSON to false
	I1209 02:19:41.517056  802954 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":28931,"bootTime":1765217850,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:19:41.517117  802954 start.go:143] virtualization: kvm guest
	I1209 02:19:41.518848  802954 out.go:179] * [functional-230202] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:19:41.519969  802954 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:19:41.520015  802954 notify.go:221] Checking for updates...
	I1209 02:19:41.522289  802954 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:19:41.523531  802954 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
	I1209 02:19:41.524683  802954 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
	I1209 02:19:41.526842  802954 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:19:41.527961  802954 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:19:41.529535  802954 config.go:182] Loaded profile config "functional-230202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 02:19:41.530233  802954 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:19:41.569884  802954 out.go:179] * Using the kvm2 driver based on existing profile
	I1209 02:19:41.571310  802954 start.go:309] selected driver: kvm2
	I1209 02:19:41.571325  802954 start.go:927] validating driver "kvm2" against &{Name:functional-230202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-230202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:19:41.571447  802954 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:19:41.572744  802954 cni.go:84] Creating CNI manager for ""
	I1209 02:19:41.572831  802954 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1209 02:19:41.572880  802954 start.go:353] cluster config:
	{Name:functional-230202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-230202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:19:41.575668  802954 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3e2af7d43d0de       56cc512116c8f       4 minutes ago       Exited              mount-munger              0                   cf511ffa2527d       busybox-mount                               default
	4a3dea9d1d34d       20d0be4ee4524       4 minutes ago       Running             mysql                     0                   73ede76fb04be       mysql-7d7b65bc95-6rb2c                      default
	22fc1f0daafc6       d4918ca78576a       4 minutes ago       Running             myfrontend                0                   171c6dd5f3285       sp-pod                                      default
	8df9b74040f59       9056ab77afb8e       4 minutes ago       Running             echo-server               0                   797a0dbad3d10       hello-node-connect-9f67c86d4-4pf8j          default
	85ed9e9ebc3eb       9056ab77afb8e       5 minutes ago       Running             echo-server               0                   2cce944d19e87       hello-node-5758569b79-xgbzb                 default
	014c37675731b       aa9d02839d8de       5 minutes ago       Running             kube-apiserver            1                   e09d1278eb825       kube-apiserver-functional-230202            kube-system
	00a49560fa234       aa5e3ebc0dfed       5 minutes ago       Running             coredns                   2                   2c28c66880c49       coredns-7d764666f9-7f5tc                    kube-system
	81093ee51f213       6e38f40d628db       5 minutes ago       Running             storage-provisioner       5                   dbc7d58ad61d2       storage-provisioner                         kube-system
	0d8c20bf62c86       aa9d02839d8de       5 minutes ago       Exited              kube-apiserver            0                   e09d1278eb825       kube-apiserver-functional-230202            kube-system
	ba4c2319f4575       7bb6219ddab95       5 minutes ago       Running             kube-scheduler            2                   61f053dee340d       kube-scheduler-functional-230202            kube-system
	320e7a7f94fba       8a4ded35a3eb1       5 minutes ago       Running             kube-proxy                2                   a2676a1173776       kube-proxy-vfp52                            kube-system
	32b91d945d0d7       45f3cc72d235f       5 minutes ago       Running             kube-controller-manager   3                   241038bdf8d02       kube-controller-manager-functional-230202   kube-system
	bffdf6a0018d7       6e38f40d628db       5 minutes ago       Exited              storage-provisioner       4                   dbc7d58ad61d2       storage-provisioner                         kube-system
	9d8d898b308aa       a3e246e9556e9       5 minutes ago       Running             etcd                      3                   848346cbb05a0       etcd-functional-230202                      kube-system
	1560953807da2       a3e246e9556e9       6 minutes ago       Exited              etcd                      2                   848346cbb05a0       etcd-functional-230202                      kube-system
	73607b8608c13       45f3cc72d235f       6 minutes ago       Exited              kube-controller-manager   2                   241038bdf8d02       kube-controller-manager-functional-230202   kube-system
	d15f00d56c75a       8a4ded35a3eb1       6 minutes ago       Exited              kube-proxy                1                   a2676a1173776       kube-proxy-vfp52                            kube-system
	4d4643a1fb178       aa5e3ebc0dfed       6 minutes ago       Exited              coredns                   1                   2c28c66880c49       coredns-7d764666f9-7f5tc                    kube-system
	1d28d97941dcd       7bb6219ddab95       6 minutes ago       Exited              kube-scheduler            1                   61f053dee340d       kube-scheduler-functional-230202            kube-system
	
	
	==> containerd <==
	Dec 09 02:24:15 functional-230202 containerd[4205]: time="2025-12-09T02:24:15.037181418Z" level=info msg="container event discarded" container=f7c9a3a1ea6e3979fa8e00520920a5fbb54eff85bbdbd988da83bf6473876e8d type=CONTAINER_DELETED_EVENT
	Dec 09 02:24:15 functional-230202 containerd[4205]: time="2025-12-09T02:24:15.650809988Z" level=info msg="container event discarded" container=ba4c2319f4575d5b40bd09eeb9a8d693c7598056f6849388e599a87c0b7d6ff4 type=CONTAINER_CREATED_EVENT
	Dec 09 02:24:15 functional-230202 containerd[4205]: time="2025-12-09T02:24:15.892943932Z" level=info msg="container event discarded" container=ba4c2319f4575d5b40bd09eeb9a8d693c7598056f6849388e599a87c0b7d6ff4 type=CONTAINER_STARTED_EVENT
	Dec 09 02:24:15 functional-230202 containerd[4205]: time="2025-12-09T02:24:15.929369564Z" level=info msg="container event discarded" container=e09d1278eb825eb7a8417cdf813e8ae297e65677692e41318bc8c3107aaedd39 type=CONTAINER_CREATED_EVENT
	Dec 09 02:24:15 functional-230202 containerd[4205]: time="2025-12-09T02:24:15.929427142Z" level=info msg="container event discarded" container=e09d1278eb825eb7a8417cdf813e8ae297e65677692e41318bc8c3107aaedd39 type=CONTAINER_STARTED_EVENT
	Dec 09 02:24:15 functional-230202 containerd[4205]: time="2025-12-09T02:24:15.977822925Z" level=info msg="container event discarded" container=0d8c20bf62c86c9ef165fda970579006f6457739934ba1888b29a18f6fa4e86f type=CONTAINER_CREATED_EVENT
	Dec 09 02:24:16 functional-230202 containerd[4205]: time="2025-12-09T02:24:16.195293524Z" level=info msg="container event discarded" container=81093ee51f2133aae07f24822f4b3ea378421cba26eae43a71ab89c9b8b5f99d type=CONTAINER_CREATED_EVENT
	Dec 09 02:24:16 functional-230202 containerd[4205]: time="2025-12-09T02:24:16.220950310Z" level=info msg="container event discarded" container=00a49560fa23445976513c6e4cb39ee1d5eac4967ce4a2ff1230ea4d92194fc9 type=CONTAINER_CREATED_EVENT
	Dec 09 02:24:16 functional-230202 containerd[4205]: time="2025-12-09T02:24:16.328289118Z" level=info msg="container event discarded" container=0d8c20bf62c86c9ef165fda970579006f6457739934ba1888b29a18f6fa4e86f type=CONTAINER_STARTED_EVENT
	Dec 09 02:24:16 functional-230202 containerd[4205]: time="2025-12-09T02:24:16.503919045Z" level=info msg="container event discarded" container=00a49560fa23445976513c6e4cb39ee1d5eac4967ce4a2ff1230ea4d92194fc9 type=CONTAINER_STARTED_EVENT
	Dec 09 02:24:16 functional-230202 containerd[4205]: time="2025-12-09T02:24:16.641383958Z" level=info msg="container event discarded" container=81093ee51f2133aae07f24822f4b3ea378421cba26eae43a71ab89c9b8b5f99d type=CONTAINER_STARTED_EVENT
	Dec 09 02:24:16 functional-230202 containerd[4205]: time="2025-12-09T02:24:16.720045189Z" level=info msg="container event discarded" container=0d8c20bf62c86c9ef165fda970579006f6457739934ba1888b29a18f6fa4e86f type=CONTAINER_STOPPED_EVENT
	Dec 09 02:24:17 functional-230202 containerd[4205]: time="2025-12-09T02:24:17.066091627Z" level=info msg="container event discarded" container=b87043143e008f50c5a99183a89b1c21d9261237f958ce95a653941ba3f9fb12 type=CONTAINER_STOPPED_EVENT
	Dec 09 02:24:17 functional-230202 containerd[4205]: time="2025-12-09T02:24:17.066164614Z" level=info msg="container event discarded" container=37e6f93d58ed329dc9df54293f1bcc8cc1c3f33bbfcab5d6f118ccba90c96460 type=CONTAINER_STOPPED_EVENT
	Dec 09 02:24:17 functional-230202 containerd[4205]: time="2025-12-09T02:24:17.142876055Z" level=info msg="container event discarded" container=b87043143e008f50c5a99183a89b1c21d9261237f958ce95a653941ba3f9fb12 type=CONTAINER_DELETED_EVENT
	Dec 09 02:24:17 functional-230202 containerd[4205]: time="2025-12-09T02:24:17.154397797Z" level=info msg="container event discarded" container=16679e8835d87c24da7072405f21493a419abe21ac2605b25f3b2f1fef5dead5 type=CONTAINER_DELETED_EVENT
	Dec 09 02:24:17 functional-230202 containerd[4205]: time="2025-12-09T02:24:17.168728474Z" level=info msg="container event discarded" container=014c37675731b5ec36ccf55e32cd4482db07cd9645558928df3edf1de744a2c0 type=CONTAINER_CREATED_EVENT
	Dec 09 02:24:17 functional-230202 containerd[4205]: time="2025-12-09T02:24:17.256261937Z" level=info msg="container event discarded" container=014c37675731b5ec36ccf55e32cd4482db07cd9645558928df3edf1de744a2c0 type=CONTAINER_STARTED_EVENT
	Dec 09 02:24:37 functional-230202 containerd[4205]: time="2025-12-09T02:24:37.066751065Z" level=info msg="container event discarded" container=5bafc2fd5aa07b0e42ce8f796f4c9b76721c7b26be374b74976e7695954eefab type=CONTAINER_CREATED_EVENT
	Dec 09 02:24:37 functional-230202 containerd[4205]: time="2025-12-09T02:24:37.067304360Z" level=info msg="container event discarded" container=5bafc2fd5aa07b0e42ce8f796f4c9b76721c7b26be374b74976e7695954eefab type=CONTAINER_STARTED_EVENT
	Dec 09 02:24:39 functional-230202 containerd[4205]: time="2025-12-09T02:24:39.987769197Z" level=info msg="container event discarded" container=5bafc2fd5aa07b0e42ce8f796f4c9b76721c7b26be374b74976e7695954eefab type=CONTAINER_STOPPED_EVENT
	Dec 09 02:24:41 functional-230202 containerd[4205]: time="2025-12-09T02:24:41.013733568Z" level=info msg="container event discarded" container=2cce944d19e87207fa7a321975e97819520c1bffc68bb14b4ddc7ed2e11b93ee type=CONTAINER_CREATED_EVENT
	Dec 09 02:24:41 functional-230202 containerd[4205]: time="2025-12-09T02:24:41.013842586Z" level=info msg="container event discarded" container=2cce944d19e87207fa7a321975e97819520c1bffc68bb14b4ddc7ed2e11b93ee type=CONTAINER_STARTED_EVENT
	Dec 09 02:24:42 functional-230202 containerd[4205]: time="2025-12-09T02:24:42.426675064Z" level=info msg="container event discarded" container=85ed9e9ebc3eb37eee84432a3b41ccfc8428a5c1a842fbdf2f4f52ae6d49ffd1 type=CONTAINER_CREATED_EVENT
	Dec 09 02:24:42 functional-230202 containerd[4205]: time="2025-12-09T02:24:42.547378583Z" level=info msg="container event discarded" container=85ed9e9ebc3eb37eee84432a3b41ccfc8428a5c1a842fbdf2f4f52ae6d49ffd1 type=CONTAINER_STARTED_EVENT
	
	
	==> coredns [00a49560fa23445976513c6e4cb39ee1d5eac4967ce4a2ff1230ea4d92194fc9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] plugin/kubernetes: Warning: watch ended with error
	[INFO] plugin/kubernetes: Warning: watch ended with error
	[INFO] plugin/kubernetes: Warning: watch ended with error
	[INFO] 127.0.0.1:49203 - 16929 "HINFO IN 1033596573786798383.2353995115049048982. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.470488433s
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> coredns [4d4643a1fb17832071a666a03f7655765ff8ca395addc34f1198d2d882f6fe45] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:39830 - 48183 "HINFO IN 1396214294398800002.6324543457674767606. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.089183918s
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-230202
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-230202
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=functional-230202
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_17_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:17:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-230202
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:24:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:24:11 +0000   Tue, 09 Dec 2025 02:17:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:24:11 +0000   Tue, 09 Dec 2025 02:17:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:24:11 +0000   Tue, 09 Dec 2025 02:17:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:24:11 +0000   Tue, 09 Dec 2025 02:17:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.49
	  Hostname:    functional-230202
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 1feaf3b9b48d43b9ad39622a71ab3af6
	  System UUID:                1feaf3b9-b48d-43b9-ad39-622a71ab3af6
	  Boot ID:                    dad7da40-6912-4177-8a63-d34a261720f9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.4
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-xgbzb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  default                     hello-node-connect-9f67c86d4-4pf8j            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  default                     mysql-7d7b65bc95-6rb2c                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    4m49s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 coredns-7d764666f9-7f5tc                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m21s
	  kube-system                 etcd-functional-230202                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m28s
	  kube-system                 kube-apiserver-functional-230202              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-controller-manager-functional-230202     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-proxy-vfp52                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 kube-scheduler-functional-230202              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-lrn2g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-bq47q          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  7m22s  node-controller  Node functional-230202 event: Registered Node functional-230202 in Controller
	  Normal  RegisteredNode  6m16s  node-controller  Node functional-230202 event: Registered Node functional-230202 in Controller
	  Normal  RegisteredNode  5m28s  node-controller  Node functional-230202 event: Registered Node functional-230202 in Controller
	
	
	==> dmesg <==
	[  +1.193997] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 9 02:17] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.111453] kauditd_printk_skb: 121 callbacks suppressed
	[  +0.131678] kauditd_printk_skb: 180 callbacks suppressed
	[  +0.000033] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.026543] kauditd_printk_skb: 219 callbacks suppressed
	[Dec 9 02:18] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.034338] kauditd_printk_skb: 101 callbacks suppressed
	[  +8.723409] kauditd_printk_skb: 38 callbacks suppressed
	[  +3.610775] kauditd_printk_skb: 76 callbacks suppressed
	[  +4.018540] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.117645] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.037686] kauditd_printk_skb: 101 callbacks suppressed
	[Dec 9 02:19] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.287744] kauditd_printk_skb: 54 callbacks suppressed
	[  +0.966248] kauditd_printk_skb: 71 callbacks suppressed
	[  +5.080007] kauditd_printk_skb: 66 callbacks suppressed
	[  +1.231067] kauditd_printk_skb: 111 callbacks suppressed
	[  +0.000187] kauditd_printk_skb: 158 callbacks suppressed
	[Dec 9 02:20] kauditd_printk_skb: 89 callbacks suppressed
	[  +5.962677] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.246869] crun[8092]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.524414] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [1560953807da2089255e02df18de1db062e40ddeced3fe06b1894492835cef56] <==
	{"level":"warn","ts":"2025-12-09T02:18:22.849757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:18:22.863631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:18:22.869494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:18:22.877850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:18:22.890494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:18:22.893974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:18:22.969135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48128","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-09T02:18:51.092290Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-09T02:18:51.092413Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-230202","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.49:2380"],"advertise-client-urls":["https://192.168.39.49:2379"]}
	{"level":"error","ts":"2025-12-09T02:18:51.092770Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-09T02:18:58.098510Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-09T02:18:58.098585Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T02:18:58.098619Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7f2a407b6bb4eb12","current-leader-member-id":"7f2a407b6bb4eb12"}
	{"level":"info","ts":"2025-12-09T02:18:58.098690Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-09T02:18:58.098705Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-09T02:18:58.099486Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-09T02:18:58.100144Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-09T02:18:58.100416Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-09T02:18:58.100842Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.49:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-09T02:18:58.100868Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.49:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-09T02:18:58.100879Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.49:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T02:18:58.106869Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.49:2380"}
	{"level":"error","ts":"2025-12-09T02:18:58.106946Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.49:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T02:18:58.107003Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.49:2380"}
	{"level":"info","ts":"2025-12-09T02:18:58.107031Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-230202","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.49:2380"],"advertise-client-urls":["https://192.168.39.49:2379"]}
	
	
	==> etcd [9d8d898b308aa681f44879beaed916c69541cdad7b0474868c29a9e41ce9f61f] <==
	{"level":"warn","ts":"2025-12-09T02:19:17.905423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:19:17.919329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:19:17.926115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:19:17.934501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:19:17.943763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:19:17.952838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:19:17.964479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:19:17.973081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:19:17.981186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:19:17.990987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:19:17.999580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:19:18.013761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:19:18.021783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:19:18.029932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:19:18.040476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:19:18.050672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:19:18.059534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:19:18.069632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:19:18.163554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:20:00.719651Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"177.901379ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-12-09T02:20:00.720504Z","caller":"traceutil/trace.go:172","msg":"trace[1312859481] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:848; }","duration":"178.837457ms","start":"2025-12-09T02:20:00.541655Z","end":"2025-12-09T02:20:00.720492Z","steps":["trace[1312859481] 'range keys from in-memory index tree'  (duration: 177.712683ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:20:05.505765Z","caller":"traceutil/trace.go:172","msg":"trace[774281596] linearizableReadLoop","detail":"{readStateIndex:958; appliedIndex:958; }","duration":"146.845079ms","start":"2025-12-09T02:20:05.358624Z","end":"2025-12-09T02:20:05.505469Z","steps":["trace[774281596] 'read index received'  (duration: 146.841344ms)","trace[774281596] 'applied index is now lower than readState.Index'  (duration: 3.017µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-09T02:20:05.508323Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.683382ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:20:05.508544Z","caller":"traceutil/trace.go:172","msg":"trace[1413083303] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:867; }","duration":"149.91493ms","start":"2025-12-09T02:20:05.358620Z","end":"2025-12-09T02:20:05.508535Z","steps":["trace[1413083303] 'agreement among raft nodes before linearized reading'  (duration: 147.494352ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:20:06.615868Z","caller":"traceutil/trace.go:172","msg":"trace[1765490815] transaction","detail":"{read_only:false; response_revision:869; number_of_response:1; }","duration":"154.085554ms","start":"2025-12-09T02:20:06.461768Z","end":"2025-12-09T02:20:06.615854Z","steps":["trace[1765490815] 'process raft request'  (duration: 153.932508ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:24:42 up 7 min,  0 users,  load average: 0.25, 0.62, 0.40
	Linux functional-230202 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [014c37675731b5ec36ccf55e32cd4482db07cd9645558928df3edf1de744a2c0] <==
	I1209 02:19:19.267880       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:19:19.740362       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1209 02:19:21.469282       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 02:19:21.471980       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 02:19:21.473907       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 02:19:21.488620       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:19:36.563299       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.155.28"}
	I1209 02:19:36.750944       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:19:40.461411       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.84.170"}
	I1209 02:19:41.871970       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.101.37"}
	I1209 02:19:42.769055       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:19:42.860080       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:19:42.905423       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:19:43.166007       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.229.3"}
	I1209 02:19:43.187614       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.46.248"}
	I1209 02:19:53.642089       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.143.11"}
	E1209 02:20:01.041349       1 conn.go:339] Error on socket receive: read tcp 192.168.39.49:8441->192.168.39.1:48856: use of closed network connection
	E1209 02:20:02.385996       1 watch.go:270] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1209 02:20:09.671195       1 conn.go:339] Error on socket receive: read tcp 192.168.39.49:8441->192.168.39.1:33808: use of closed network connection
	E1209 02:20:14.967762       1 conn.go:339] Error on socket receive: read tcp 192.168.39.49:8441->192.168.39.1:53524: use of closed network connection
	E1209 02:20:16.634893       1 conn.go:339] Error on socket receive: read tcp 192.168.39.49:8441->192.168.39.1:53550: use of closed network connection
	E1209 02:20:17.854095       1 conn.go:339] Error on socket receive: read tcp 192.168.39.49:8441->192.168.39.1:53562: use of closed network connection
	E1209 02:20:20.703954       1 conn.go:339] Error on socket receive: read tcp 192.168.39.49:8441->192.168.39.1:53590: use of closed network connection
	E1209 02:20:24.293562       1 conn.go:339] Error on socket receive: read tcp 192.168.39.49:8441->192.168.39.1:56458: use of closed network connection
	E1209 02:20:30.466610       1 conn.go:339] Error on socket receive: read tcp 192.168.39.49:8441->192.168.39.1:56472: use of closed network connection
	
	
	==> kube-apiserver [0d8c20bf62c86c9ef165fda970579006f6457739934ba1888b29a18f6fa4e86f] <==
	I1209 02:19:16.554832       1 options.go:263] external host was not specified, using 192.168.39.49
	I1209 02:19:16.567553       1 server.go:150] Version: v1.35.0-beta.0
	I1209 02:19:16.567597       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1209 02:19:16.575864       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-controller-manager [32b91d945d0d7483a97c44a086fc7d1e8ec12f759d79e011055a0435b22f7b71] <==
	E1209 02:19:18.841828       1 reflector.go:204] "Failed to watch" err="deployments.apps is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"deployments\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Deployment"
	E1209 02:19:18.842041       1 reflector.go:204] "Failed to watch" err="endpoints is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"endpoints\" in API group \"\" at the cluster scope - error from a previous attempt: read tcp 192.168.39.49:52032->192.168.39.49:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Endpoints"
	E1209 02:19:18.846491       1 reflector.go:204] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope - error from a previous attempt: dial tcp 192.168.39.49:8441: connect: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.EndpointSlice"
	E1209 02:19:18.846606       1 reflector.go:204] "Failed to watch" err="clusterroles.rbac.authorization.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"clusterroles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ClusterRole"
	E1209 02:19:18.846829       1 reflector.go:204] "Failed to watch" err="certificatesigningrequests.certificates.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"certificatesigningrequests\" in API group \"certificates.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CertificateSigningRequest"
	E1209 02:19:18.846854       1 reflector.go:204] "Failed to watch" err="ingressclasses.networking.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"ingressclasses\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.IngressClass"
	E1209 02:19:18.846912       1 reflector.go:204] "Failed to watch" err="prioritylevelconfigurations.flowcontrol.apiserver.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"prioritylevelconfigurations\" in API group \"flowcontrol.apiserver.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PriorityLevelConfiguration"
	E1209 02:19:18.846935       1 reflector.go:204] "Failed to watch" err="horizontalpodautoscalers.autoscaling is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"horizontalpodautoscalers\" in API group \"autoscaling\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v2.HorizontalPodAutoscaler"
	E1209 02:19:18.846992       1 reflector.go:204] "Failed to watch" err="namespaces is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1209 02:19:18.847013       1 reflector.go:204] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1209 02:19:18.847033       1 reflector.go:204] "Failed to watch" err="jobs.batch is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"jobs\" in API group \"batch\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Job"
	E1209 02:19:18.847056       1 reflector.go:204] "Failed to watch" err="flowschemas.flowcontrol.apiserver.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"flowschemas\" in API group \"flowcontrol.apiserver.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.FlowSchema"
	E1209 02:19:18.847105       1 reflector.go:204] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1209 02:19:18.847129       1 reflector.go:204] "Failed to watch" err="limitranges is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"limitranges\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.LimitRange"
	E1209 02:19:18.847150       1 reflector.go:204] "Failed to watch" err="nodes is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1209 02:19:18.847168       1 reflector.go:204] "Failed to watch" err="cronjobs.batch is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"cronjobs\" in API group \"batch\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CronJob"
	E1209 02:19:18.847185       1 reflector.go:204] "Failed to watch" err="networkpolicies.networking.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.NetworkPolicy"
	E1209 02:19:18.858711       1 reflector.go:204] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope - error from a previous attempt: dial tcp 192.168.39.49:8441: connect: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1209 02:19:42.865594       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:19:42.893539       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:19:42.915161       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:19:42.919264       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:19:42.930020       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:19:42.931138       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:19:42.938306       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [73607b8608c13267320a464db270d987f8f7db263021b93b5f59dde741a198ca] <==
	I1209 02:18:26.831160       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1209 02:18:26.831249       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-230202"
	I1209 02:18:26.831275       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1209 02:18:26.831305       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:26.831407       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:26.831415       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:26.831460       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:26.831536       1 range_allocator.go:177] "Sending events to api server"
	I1209 02:18:26.831594       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1209 02:18:26.831599       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:18:26.831603       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:26.831656       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:26.831875       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:26.832175       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:26.832313       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:26.832336       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:26.831065       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:26.832584       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:26.832448       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:26.842911       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:26.844850       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:18:26.926837       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:26.926870       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1209 02:18:26.926876       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1209 02:18:26.945931       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [320e7a7f94fba494e63b43df03a7c3d8c74c5eca9b50c62ce0924a6f970ea247] <==
	I1209 02:19:11.148107       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1209 02:19:11.148180       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 02:19:11.149369       1 server_linux.go:136] "Using iptables Proxier"
	I1209 02:19:11.169335       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:19:11.169626       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1209 02:19:11.169680       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:19:11.174044       1 config.go:200] "Starting service config controller"
	I1209 02:19:11.179575       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:19:11.177894       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:19:11.181826       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:19:11.177930       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:19:11.182159       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:19:11.179332       1 config.go:309] "Starting node config controller"
	I1209 02:19:11.182169       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:19:11.182173       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:19:11.280070       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:19:11.282777       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1209 02:19:11.282748       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	E1209 02:19:19.170606       1 reflector.go:204] "Failed to watch" err="nodes \"functional-230202\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope - error from a previous attempt: dial tcp 192.168.39.49:8441: connect: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1209 02:19:19.170664       1 reflector.go:204] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope - error from a previous attempt: write tcp 192.168.39.49:51948->192.168.39.49:8441: write: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.EndpointSlice"
	E1209 02:19:19.170694       1 reflector.go:204] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1209 02:19:19.170729       1 reflector.go:204] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope - error from a previous attempt: read tcp 192.168.39.49:51960->192.168.39.49:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ServiceCIDR"
	
	
	==> kube-proxy [d15f00d56c75a2ba084aa12d8fa0707212f8053b93e1f3aba7b960b6644853c8] <==
	I1209 02:18:10.252441       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:18:29.254544       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:29.254590       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.49"]
	E1209 02:18:29.254687       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:18:29.291596       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1209 02:18:29.291936       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 02:18:29.292069       1 server_linux.go:136] "Using iptables Proxier"
	I1209 02:18:29.302902       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:18:29.303480       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1209 02:18:29.303629       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:18:29.308906       1 config.go:309] "Starting node config controller"
	I1209 02:18:29.308943       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:18:29.308950       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:18:29.309349       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:18:29.309376       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:18:29.309436       1 config.go:200] "Starting service config controller"
	I1209 02:18:29.309440       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:18:29.309450       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:18:29.309453       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:18:29.409756       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1209 02:18:29.409809       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:18:29.409823       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1d28d97941dcdfcc736be694107937567752b27c0d37893075ef99c4d3a8592c] <==
	I1209 02:18:10.397638       1 serving.go:386] Generated self-signed cert in-memory
	W1209 02:18:10.404608       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.49:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.49:8441: connect: connection refused
	W1209 02:18:10.404628       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 02:18:10.404634       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 02:18:10.414892       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1209 02:18:10.414929       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:18:10.417032       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:18:10.417048       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 02:18:10.417056       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:18:10.417168       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1209 02:18:23.611850       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	I1209 02:18:28.618005       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:13.497811       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1209 02:19:13.497885       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1209 02:19:13.497891       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1209 02:19:13.498061       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1209 02:19:13.498088       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ba4c2319f4575d5b40bd09eeb9a8d693c7598056f6849388e599a87c0b7d6ff4] <==
	E1209 02:19:18.939674       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1209 02:19:19.042293       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1209 02:19:19.043514       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1209 02:19:19.043880       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1209 02:19:19.045344       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1209 02:19:19.045639       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1209 02:19:19.045916       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1209 02:19:19.046150       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1209 02:19:19.047334       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1209 02:19:19.047421       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1209 02:19:19.047463       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1209 02:19:19.047963       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1209 02:19:19.047986       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1209 02:19:19.048490       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1209 02:19:19.050427       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1209 02:19:19.050649       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1209 02:19:19.050689       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1209 02:19:19.050969       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1209 02:19:19.051003       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1209 02:19:19.060804       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1209 02:19:19.060986       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1209 02:19:19.061137       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1209 02:19:19.147572       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1209 02:19:19.147660       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1209 02:19:19.147726       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	
	
	==> kubelet <==
	Dec 09 02:23:46 functional-230202 kubelet[5131]: E1209 02:23:46.934713    5131 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-bq47q" containerName="kubernetes-dashboard"
	Dec 09 02:23:46 functional-230202 kubelet[5131]: E1209 02:23:46.936154    5131 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-bq47q" podUID="7d7f17e0-8e92-4b65-9614-6ac4ca601c34"
	Dec 09 02:23:49 functional-230202 kubelet[5131]: E1209 02:23:49.935249    5131 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-lrn2g" containerName="dashboard-metrics-scraper"
	Dec 09 02:23:49 functional-230202 kubelet[5131]: E1209 02:23:49.937399    5131 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-lrn2g" podUID="34a49a75-8ee5-4711-97df-f1666fb05364"
	Dec 09 02:23:59 functional-230202 kubelet[5131]: E1209 02:23:59.933894    5131 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-bq47q" containerName="kubernetes-dashboard"
	Dec 09 02:23:59 functional-230202 kubelet[5131]: E1209 02:23:59.935654    5131 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-bq47q" podUID="7d7f17e0-8e92-4b65-9614-6ac4ca601c34"
	Dec 09 02:24:03 functional-230202 kubelet[5131]: E1209 02:24:03.934589    5131 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-lrn2g" containerName="dashboard-metrics-scraper"
	Dec 09 02:24:03 functional-230202 kubelet[5131]: E1209 02:24:03.936111    5131 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-lrn2g" podUID="34a49a75-8ee5-4711-97df-f1666fb05364"
	Dec 09 02:24:09 functional-230202 kubelet[5131]: E1209 02:24:09.934710    5131 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-230202" containerName="kube-controller-manager"
	Dec 09 02:24:12 functional-230202 kubelet[5131]: E1209 02:24:12.934938    5131 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-230202" containerName="kube-scheduler"
	Dec 09 02:24:14 functional-230202 kubelet[5131]: E1209 02:24:14.934422    5131 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-bq47q" containerName="kubernetes-dashboard"
	Dec 09 02:24:14 functional-230202 kubelet[5131]: E1209 02:24:14.934770    5131 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-7f5tc" containerName="coredns"
	Dec 09 02:24:14 functional-230202 kubelet[5131]: E1209 02:24:14.935931    5131 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-bq47q" podUID="7d7f17e0-8e92-4b65-9614-6ac4ca601c34"
	Dec 09 02:24:15 functional-230202 kubelet[5131]: E1209 02:24:15.934475    5131 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-lrn2g" containerName="dashboard-metrics-scraper"
	Dec 09 02:24:15 functional-230202 kubelet[5131]: E1209 02:24:15.936058    5131 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-lrn2g" podUID="34a49a75-8ee5-4711-97df-f1666fb05364"
	Dec 09 02:24:26 functional-230202 kubelet[5131]: E1209 02:24:26.937961    5131 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-bq47q" containerName="kubernetes-dashboard"
	Dec 09 02:24:26 functional-230202 kubelet[5131]: E1209 02:24:26.939594    5131 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-bq47q" podUID="7d7f17e0-8e92-4b65-9614-6ac4ca601c34"
	Dec 09 02:24:29 functional-230202 kubelet[5131]: E1209 02:24:29.934438    5131 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-lrn2g" containerName="dashboard-metrics-scraper"
	Dec 09 02:24:29 functional-230202 kubelet[5131]: E1209 02:24:29.935747    5131 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-lrn2g" podUID="34a49a75-8ee5-4711-97df-f1666fb05364"
	Dec 09 02:24:31 functional-230202 kubelet[5131]: E1209 02:24:31.934025    5131 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-230202" containerName="etcd"
	Dec 09 02:24:32 functional-230202 kubelet[5131]: E1209 02:24:32.934767    5131 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-230202" containerName="kube-apiserver"
	Dec 09 02:24:37 functional-230202 kubelet[5131]: E1209 02:24:37.934622    5131 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-bq47q" containerName="kubernetes-dashboard"
	Dec 09 02:24:37 functional-230202 kubelet[5131]: E1209 02:24:37.936391    5131 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-bq47q" podUID="7d7f17e0-8e92-4b65-9614-6ac4ca601c34"
	Dec 09 02:24:42 functional-230202 kubelet[5131]: E1209 02:24:42.934110    5131 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-lrn2g" containerName="dashboard-metrics-scraper"
	Dec 09 02:24:42 functional-230202 kubelet[5131]: E1209 02:24:42.936130    5131 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-lrn2g" podUID="34a49a75-8ee5-4711-97df-f1666fb05364"
	
	
	==> storage-provisioner [81093ee51f2133aae07f24822f4b3ea378421cba26eae43a71ab89c9b8b5f99d] <==
	W1209 02:24:18.094451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:20.098459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:20.103308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:22.107786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:22.113100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:24.117265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:24.125401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:26.130046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:26.135138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:28.138914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:28.144920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:30.148573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:30.156732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:32.160759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:32.166288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:34.169725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:34.178337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:36.181846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:36.187610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:38.191556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:38.197127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:40.200094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:40.205869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:42.210360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:24:42.219047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [bffdf6a0018d7b1687fd62851e0692a154e2ad6f7b524e5b8a00a47b14f7c897] <==
	I1209 02:19:06.077012       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1209 02:19:06.079162       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-230202 -n functional-230202
helpers_test.go:269: (dbg) Run:  kubectl --context functional-230202 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount dashboard-metrics-scraper-5565989548-lrn2g kubernetes-dashboard-b84665fb8-bq47q
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-230202 describe pod busybox-mount dashboard-metrics-scraper-5565989548-lrn2g kubernetes-dashboard-b84665fb8-bq47q
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-230202 describe pod busybox-mount dashboard-metrics-scraper-5565989548-lrn2g kubernetes-dashboard-b84665fb8-bq47q: exit status 1 (73.304125ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-230202/192.168.39.49
	Start Time:       Tue, 09 Dec 2025 02:20:00 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  containerd://3e2af7d43d0de82b73016072e06234e129562ba88cc3030f38a246a01d5a0dd7
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 09 Dec 2025 02:20:11 +0000
	      Finished:     Tue, 09 Dec 2025 02:20:11 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r9d5x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-r9d5x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  4m43s  default-scheduler  Successfully assigned default/busybox-mount to functional-230202
	  Normal  Pulling    4m42s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m32s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 908ms (9.905s including waiting). Image size: 2395207 bytes.
	  Normal  Created    4m32s  kubelet            Container created
	  Normal  Started    4m32s  kubelet            Container started

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-lrn2g" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-bq47q" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-230202 describe pod busybox-mount dashboard-metrics-scraper-5565989548-lrn2g kubernetes-dashboard-b84665fb8-bq47q: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (302.14s)


Test pass (384/437)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.62
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 3.21
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.16
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.1
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.16
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.65
31 TestOffline 107.13
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 128.65
38 TestAddons/serial/Volcano 44.39
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/serial/GCPAuth/FakeCredentials 8.51
44 TestAddons/parallel/Registry 14.92
45 TestAddons/parallel/RegistryCreds 0.65
46 TestAddons/parallel/Ingress 17.7
47 TestAddons/parallel/InspektorGadget 10.79
48 TestAddons/parallel/MetricsServer 6
50 TestAddons/parallel/CSI 48.04
51 TestAddons/parallel/Headlamp 18.74
52 TestAddons/parallel/CloudSpanner 6.55
54 TestAddons/parallel/NvidiaDevicePlugin 6.47
55 TestAddons/parallel/Yakd 11.95
57 TestAddons/StoppedEnableDisable 87.67
58 TestCertOptions 85.22
59 TestCertExpiration 297.57
61 TestForceSystemdFlag 65.28
62 TestForceSystemdEnv 65.14
67 TestErrorSpam/setup 43.71
68 TestErrorSpam/start 0.35
69 TestErrorSpam/status 0.69
70 TestErrorSpam/pause 1.51
71 TestErrorSpam/unpause 1.75
72 TestErrorSpam/stop 4.8
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 78.03
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 46.31
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.82
84 TestFunctional/serial/CacheCmd/cache/add_local 1.33
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.07
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.37
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 38.43
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.22
95 TestFunctional/serial/LogsFileCmd 1.25
96 TestFunctional/serial/InvalidService 4.5
98 TestFunctional/parallel/ConfigCmd 0.43
99 TestFunctional/parallel/DashboardCmd 19.83
100 TestFunctional/parallel/DryRun 0.27
101 TestFunctional/parallel/InternationalLanguage 0.15
102 TestFunctional/parallel/StatusCmd 0.93
106 TestFunctional/parallel/ServiceCmdConnect 349.66
107 TestFunctional/parallel/AddonsCmd 0.16
108 TestFunctional/parallel/PersistentVolumeClaim 39.85
110 TestFunctional/parallel/SSHCmd 0.34
111 TestFunctional/parallel/CpCmd 1.21
112 TestFunctional/parallel/MySQL 31.66
113 TestFunctional/parallel/FileSync 0.19
114 TestFunctional/parallel/CertSync 1.17
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
122 TestFunctional/parallel/License 0.41
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
127 TestFunctional/parallel/ImageCommands/ImageBuild 2.41
128 TestFunctional/parallel/ImageCommands/Setup 1
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.48
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.6
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.74
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.4
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.08
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
148 TestFunctional/parallel/Version/short 0.08
149 TestFunctional/parallel/Version/components 0.47
150 TestFunctional/parallel/ServiceCmd/DeployApp 357.16
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
152 TestFunctional/parallel/ProfileCmd/profile_list 0.31
153 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
154 TestFunctional/parallel/MountCmd/any-port 5.17
155 TestFunctional/parallel/MountCmd/specific-port 1.31
156 TestFunctional/parallel/MountCmd/VerifyCleanup 1.41
157 TestFunctional/parallel/ServiceCmd/List 1.2
158 TestFunctional/parallel/ServiceCmd/JSONOutput 1.2
159 TestFunctional/parallel/ServiceCmd/HTTPS 0.24
160 TestFunctional/parallel/ServiceCmd/Format 0.24
161 TestFunctional/parallel/ServiceCmd/URL 0.25
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 76.72
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 40.27
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.08
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.06
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.26
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.19
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.35
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.13
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 47.56
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.33
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.32
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 3.88
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.49
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.25
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.13
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.94
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 8.41
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.17
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 28.49
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.38
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.3
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 36.97
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.22
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.22
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.33
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.46
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 9.24
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.81
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.81
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.6
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.22
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.19
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.22
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.24
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.45
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.41
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.25
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.44
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.28
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.37
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.29
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.08
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.08
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.08
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.6
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.61
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.68
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 1.51
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.43
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 16.41
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.38
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.34
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.35
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.61
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.1
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 207.17
262 TestMultiControlPlane/serial/DeployApp 4.75
263 TestMultiControlPlane/serial/PingHostFromPods 1.4
264 TestMultiControlPlane/serial/AddWorkerNode 44.85
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.7
267 TestMultiControlPlane/serial/CopyFile 11.14
268 TestMultiControlPlane/serial/StopSecondaryNode 88.17
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.55
270 TestMultiControlPlane/serial/RestartSecondaryNode 32.08
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.95
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 383.82
273 TestMultiControlPlane/serial/DeleteSecondaryNode 6.72
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
275 TestMultiControlPlane/serial/StopCluster 245.69
276 TestMultiControlPlane/serial/RestartCluster 113.03
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
278 TestMultiControlPlane/serial/AddSecondaryNode 75.86
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.72
284 TestJSONOutput/start/Command 78.78
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.67
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.62
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 6.74
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.23
312 TestMainNoArgs 0.07
313 TestMinikubeProfile 82.3
316 TestMountStart/serial/StartWithMountFirst 24.29
317 TestMountStart/serial/VerifyMountFirst 0.31
318 TestMountStart/serial/StartWithMountSecond 24.61
319 TestMountStart/serial/VerifyMountSecond 0.32
320 TestMountStart/serial/DeleteFirst 0.71
321 TestMountStart/serial/VerifyMountPostDelete 0.31
322 TestMountStart/serial/Stop 1.36
323 TestMountStart/serial/RestartStopped 21.41
324 TestMountStart/serial/VerifyMountPostStop 0.33
327 TestMultiNode/serial/FreshStart2Nodes 102.16
328 TestMultiNode/serial/DeployApp2Nodes 3.73
329 TestMultiNode/serial/PingHostFrom2Pods 0.87
330 TestMultiNode/serial/AddNode 43.29
331 TestMultiNode/serial/MultiNodeLabels 0.06
332 TestMultiNode/serial/ProfileList 0.46
333 TestMultiNode/serial/CopyFile 6.11
334 TestMultiNode/serial/StopNode 2.13
335 TestMultiNode/serial/StartAfterStop 35.27
336 TestMultiNode/serial/RestartKeepsNodes 290.09
337 TestMultiNode/serial/DeleteNode 2.03
338 TestMultiNode/serial/StopMultiNode 171.09
339 TestMultiNode/serial/RestartMultiNode 81.16
340 TestMultiNode/serial/ValidateNameConflict 42.81
345 TestPreload 141.69
347 TestScheduledStopUnix 113.45
351 TestRunningBinaryUpgrade 130.65
353 TestKubernetesUpgrade 148.51
356 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
357 TestNoKubernetes/serial/StartWithK8s 83.29
358 TestNoKubernetes/serial/StartWithStopK8s 41.07
359 TestStoppedBinaryUpgrade/Setup 0.64
360 TestStoppedBinaryUpgrade/Upgrade 111.63
361 TestNoKubernetes/serial/Start 28.55
370 TestPause/serial/Start 74.05
371 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
372 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
373 TestNoKubernetes/serial/ProfileList 4.61
374 TestNoKubernetes/serial/Stop 1.34
375 TestNoKubernetes/serial/StartNoArgs 68.34
376 TestPause/serial/SecondStartNoReconfiguration 68.39
384 TestNetworkPlugins/group/false 4.23
385 TestStoppedBinaryUpgrade/MinikubeLogs 1.81
389 TestISOImage/Setup 25.09
390 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
392 TestISOImage/Binaries/crictl 0.2
393 TestISOImage/Binaries/curl 0.2
394 TestISOImage/Binaries/docker 0.21
395 TestISOImage/Binaries/git 0.19
396 TestISOImage/Binaries/iptables 0.23
397 TestISOImage/Binaries/podman 0.21
398 TestISOImage/Binaries/rsync 0.21
399 TestISOImage/Binaries/socat 0.21
400 TestISOImage/Binaries/wget 0.2
401 TestISOImage/Binaries/VBoxControl 0.19
402 TestISOImage/Binaries/VBoxService 0.2
403 TestPause/serial/Pause 0.68
404 TestPause/serial/VerifyStatus 0.22
405 TestPause/serial/Unpause 0.69
406 TestPause/serial/PauseAgain 1.01
407 TestPause/serial/DeletePaused 1.12
408 TestPause/serial/VerifyDeletedResources 0.43
410 TestStartStop/group/old-k8s-version/serial/FirstStart 88.99
412 TestStartStop/group/no-preload/serial/FirstStart 107.3
414 TestStartStop/group/embed-certs/serial/FirstStart 117.95
415 TestStartStop/group/old-k8s-version/serial/DeployApp 9.41
416 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.24
417 TestStartStop/group/old-k8s-version/serial/Stop 83.65
418 TestStartStop/group/no-preload/serial/DeployApp 8.29
419 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.91
420 TestStartStop/group/no-preload/serial/Stop 71.14
421 TestStartStop/group/embed-certs/serial/DeployApp 7.31
422 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
423 TestStartStop/group/embed-certs/serial/Stop 85.35
424 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
425 TestStartStop/group/old-k8s-version/serial/SecondStart 43.3
426 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
427 TestStartStop/group/no-preload/serial/SecondStart 43.9
428 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 9.01
429 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
430 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
431 TestStartStop/group/old-k8s-version/serial/Pause 2.83
433 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.8
434 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
435 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
436 TestStartStop/group/embed-certs/serial/SecondStart 62.45
437 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
438 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
439 TestStartStop/group/no-preload/serial/Pause 2.76
441 TestStartStop/group/newest-cni/serial/FirstStart 65.09
442 TestNetworkPlugins/group/auto/Start 121.6
443 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
444 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
445 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
446 TestStartStop/group/embed-certs/serial/Pause 3.28
447 TestStartStop/group/newest-cni/serial/DeployApp 0
448 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.39
449 TestNetworkPlugins/group/kindnet/Start 61.7
450 TestStartStop/group/newest-cni/serial/Stop 3.15
451 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
452 TestStartStop/group/newest-cni/serial/SecondStart 48.91
453 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.35
454 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.35
455 TestStartStop/group/default-k8s-diff-port/serial/Stop 80.85
456 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
457 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
458 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
459 TestStartStop/group/newest-cni/serial/Pause 2.52
460 TestNetworkPlugins/group/calico/Start 73.32
461 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
462 TestNetworkPlugins/group/auto/KubeletFlags 0.23
463 TestNetworkPlugins/group/auto/NetCatPod 10.64
464 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
465 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
466 TestNetworkPlugins/group/auto/DNS 0.22
467 TestNetworkPlugins/group/auto/Localhost 0.12
468 TestNetworkPlugins/group/auto/HairPin 0.14
469 TestNetworkPlugins/group/kindnet/DNS 0.17
470 TestNetworkPlugins/group/kindnet/Localhost 0.14
471 TestNetworkPlugins/group/kindnet/HairPin 0.12
472 TestNetworkPlugins/group/custom-flannel/Start 74.1
473 TestNetworkPlugins/group/enable-default-cni/Start 100.13
474 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
475 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 81.66
476 TestNetworkPlugins/group/calico/ControllerPod 6.01
477 TestNetworkPlugins/group/calico/KubeletFlags 0.23
478 TestNetworkPlugins/group/calico/NetCatPod 12.36
479 TestNetworkPlugins/group/calico/DNS 0.22
480 TestNetworkPlugins/group/calico/Localhost 0.18
481 TestNetworkPlugins/group/calico/HairPin 0.17
482 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
483 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.29
484 TestNetworkPlugins/group/flannel/Start 75.5
485 TestNetworkPlugins/group/custom-flannel/DNS 0.43
486 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
487 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
488 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
489 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
490 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
491 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.36
492 TestNetworkPlugins/group/bridge/Start 87.46
493 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
494 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
496 TestISOImage/PersistentMounts//data 0.21
497 TestISOImage/PersistentMounts//var/lib/docker 0.2
498 TestISOImage/PersistentMounts//var/lib/cni 0.21
499 TestISOImage/PersistentMounts//var/lib/kubelet 0.19
500 TestISOImage/PersistentMounts//var/lib/minikube 0.2
501 TestISOImage/PersistentMounts//var/lib/toolbox 0.21
502 TestISOImage/PersistentMounts//var/lib/boot2docker 0.2
503 TestISOImage/VersionJSON 0.19
504 TestISOImage/eBPFSupport 0.18
505 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
506 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
507 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
508 TestNetworkPlugins/group/flannel/ControllerPod 6.01
509 TestNetworkPlugins/group/flannel/KubeletFlags 0.17
510 TestNetworkPlugins/group/flannel/NetCatPod 9.22
511 TestNetworkPlugins/group/flannel/DNS 0.13
512 TestNetworkPlugins/group/flannel/Localhost 0.12
513 TestNetworkPlugins/group/flannel/HairPin 0.12
514 TestNetworkPlugins/group/bridge/KubeletFlags 0.18
515 TestNetworkPlugins/group/bridge/NetCatPod 9.22
516 TestNetworkPlugins/group/bridge/DNS 0.16
517 TestNetworkPlugins/group/bridge/Localhost 0.12
518 TestNetworkPlugins/group/bridge/HairPin 0.11

TestDownloadOnly/v1.28.0/json-events (6.62s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-369057 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-369057 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (6.617281011s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.62s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1209 01:55:37.840125  789441 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1209 01:55:37.840244  789441 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-785489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-369057
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-369057: exit status 85 (76.278628ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-369057 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd │ download-only-369057 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 01:55:31
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 01:55:31.282221  789454 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:55:31.282356  789454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:31.282369  789454 out.go:374] Setting ErrFile to fd 2...
	I1209 01:55:31.282378  789454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:31.282585  789454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
	W1209 01:55:31.282757  789454 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22081-785489/.minikube/config/config.json: open /home/jenkins/minikube-integration/22081-785489/.minikube/config/config.json: no such file or directory
	I1209 01:55:31.283352  789454 out.go:368] Setting JSON to true
	I1209 01:55:31.284457  789454 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":27481,"bootTime":1765217850,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 01:55:31.284522  789454 start.go:143] virtualization: kvm guest
	I1209 01:55:31.290358  789454 out.go:99] [download-only-369057] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1209 01:55:31.290533  789454 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22081-785489/.minikube/cache/preloaded-tarball: no such file or directory
	I1209 01:55:31.290598  789454 notify.go:221] Checking for updates...
	I1209 01:55:31.291849  789454 out.go:171] MINIKUBE_LOCATION=22081
	I1209 01:55:31.293402  789454 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 01:55:31.295072  789454 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
	I1209 01:55:31.296854  789454 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
	I1209 01:55:31.298243  789454 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1209 01:55:31.300477  789454 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 01:55:31.300783  789454 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 01:55:31.336140  789454 out.go:99] Using the kvm2 driver based on user configuration
	I1209 01:55:31.336171  789454 start.go:309] selected driver: kvm2
	I1209 01:55:31.336183  789454 start.go:927] validating driver "kvm2" against <nil>
	I1209 01:55:31.336629  789454 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 01:55:31.337376  789454 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1209 01:55:31.337582  789454 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 01:55:31.337611  789454 cni.go:84] Creating CNI manager for ""
	I1209 01:55:31.337677  789454 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1209 01:55:31.337688  789454 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 01:55:31.337776  789454 start.go:353] cluster config:
	{Name:download-only-369057 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-369057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 01:55:31.338051  789454 iso.go:125] acquiring lock: {Name:mk29a40ab0d6eac4567e308b5229766210ecee59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 01:55:31.340214  789454 out.go:99] Downloading VM boot image ...
	I1209 01:55:31.340252  789454 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22081-785489/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1209 01:55:34.500723  789454 out.go:99] Starting "download-only-369057" primary control-plane node in "download-only-369057" cluster
	I1209 01:55:34.500798  789454 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1209 01:55:34.522418  789454 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1209 01:55:34.522463  789454 cache.go:65] Caching tarball of preloaded images
	I1209 01:55:34.522678  789454 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1209 01:55:34.524394  789454 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1209 01:55:34.524419  789454 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1209 01:55:34.551353  789454 preload.go:295] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1209 01:55:34.551472  789454 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/22081-785489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-369057 host does not exist
	  To start a cluster, run: "minikube start -p download-only-369057"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-369057
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.2/json-events (3.21s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-116733 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-116733 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (3.205143421s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.21s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1209 01:55:41.435653  789441 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1209 01:55:41.435721  789441 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-785489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-116733
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-116733: exit status 85 (76.780171ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-369057 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd │ download-only-369057 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ delete  │ -p download-only-369057                                                                                                                                                             │ download-only-369057 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ start   │ -o=json --download-only -p download-only-116733 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd │ download-only-116733 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 01:55:38
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 01:55:38.285209  789635 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:55:38.285336  789635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:38.285349  789635 out.go:374] Setting ErrFile to fd 2...
	I1209 01:55:38.285356  789635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:38.285572  789635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
	I1209 01:55:38.286050  789635 out.go:368] Setting JSON to true
	I1209 01:55:38.287034  789635 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":27488,"bootTime":1765217850,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 01:55:38.287088  789635 start.go:143] virtualization: kvm guest
	I1209 01:55:38.288914  789635 out.go:99] [download-only-116733] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 01:55:38.289089  789635 notify.go:221] Checking for updates...
	I1209 01:55:38.290122  789635 out.go:171] MINIKUBE_LOCATION=22081
	I1209 01:55:38.291787  789635 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 01:55:38.292873  789635 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
	I1209 01:55:38.294115  789635 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
	I1209 01:55:38.297369  789635 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-116733 host does not exist
	  To start a cluster, run: "minikube start -p download-only-116733"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

TestDownloadOnly/v1.34.2/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.16s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-116733
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0-beta.0/json-events (3.1s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-000021 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-000021 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (3.099468835s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.10s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1209 01:55:44.912890  789441 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1209 01:55:44.912945  789441 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-785489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-000021
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-000021: exit status 85 (75.264666ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                            ARGS                                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-369057 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd        │ download-only-369057 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                      │ minikube             │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ delete  │ -p download-only-369057                                                                                                                                                                    │ download-only-369057 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ start   │ -o=json --download-only -p download-only-116733 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd        │ download-only-116733 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                      │ minikube             │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ delete  │ -p download-only-116733                                                                                                                                                                    │ download-only-116733 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ start   │ -o=json --download-only -p download-only-000021 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd │ download-only-000021 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 01:55:41
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 01:55:41.867098  789816 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:55:41.867242  789816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:41.867252  789816 out.go:374] Setting ErrFile to fd 2...
	I1209 01:55:41.867257  789816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:41.867464  789816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
	I1209 01:55:41.867915  789816 out.go:368] Setting JSON to true
	I1209 01:55:41.868840  789816 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":27492,"bootTime":1765217850,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 01:55:41.868900  789816 start.go:143] virtualization: kvm guest
	I1209 01:55:41.870754  789816 out.go:99] [download-only-000021] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 01:55:41.870920  789816 notify.go:221] Checking for updates...
	I1209 01:55:41.872309  789816 out.go:171] MINIKUBE_LOCATION=22081
	I1209 01:55:41.873735  789816 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 01:55:41.875179  789816 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
	I1209 01:55:41.876451  789816 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
	I1209 01:55:41.877746  789816 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-000021 host does not exist
	  To start a cluster, run: "minikube start -p download-only-000021"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.16s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-000021
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.65s)

=== RUN   TestBinaryMirror
I1209 01:55:45.721543  789441 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-212052 --alsologtostderr --binary-mirror http://127.0.0.1:45175 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-212052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-212052
--- PASS: TestBinaryMirror (0.65s)

TestOffline (107.13s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-494468 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-494468 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd: (1m46.23758445s)
helpers_test.go:175: Cleaning up "offline-containerd-494468" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-494468
--- PASS: TestOffline (107.13s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1060: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-520986
addons_test.go:1060: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-520986: exit status 85 (68.61949ms)

-- stdout --
	* Profile "addons-520986" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-520986"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1071: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-520986
addons_test.go:1071: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-520986: exit status 85 (68.541847ms)

-- stdout --
	* Profile "addons-520986" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-520986"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (128.65s)

=== RUN   TestAddons/Setup
addons_test.go:113: (dbg) Run:  out/minikube-linux-amd64 start -p addons-520986 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:113: (dbg) Done: out/minikube-linux-amd64 start -p addons-520986 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m8.64480117s)
--- PASS: TestAddons/Setup (128.65s)

TestAddons/serial/Volcano (44.39s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:944: volcano-controller stabilized in 26.385742ms
addons_test.go:928: volcano-scheduler stabilized in 27.270393ms
addons_test.go:936: volcano-admission stabilized in 27.535295ms
addons_test.go:950: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-bcd2g" [329ea2c4-6147-4785-a831-74c6a422d69e] Running
addons_test.go:950: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004281695s
addons_test.go:954: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-628tl" [7b98929c-c714-458f-b902-39082d4c6793] Running
addons_test.go:954: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004153936s
addons_test.go:958: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-njftg" [ec5a11a4-6f28-4d06-93ad-dc0dc8cfc4ff] Running
addons_test.go:958: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004164512s
addons_test.go:963: (dbg) Run:  kubectl --context addons-520986 delete -n volcano-system job volcano-admission-init
addons_test.go:969: (dbg) Run:  kubectl --context addons-520986 create -f testdata/vcjob.yaml
addons_test.go:977: (dbg) Run:  kubectl --context addons-520986 get vcjob -n my-volcano
addons_test.go:995: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [3dd013a8-9869-48b7-bb7c-317cc7a7c5ec] Pending
helpers_test.go:352: "test-job-nginx-0" [3dd013a8-9869-48b7-bb7c-317cc7a7c5ec] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [3dd013a8-9869-48b7-bb7c-317cc7a7c5ec] Running
addons_test.go:995: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.004612595s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-520986 addons disable volcano --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-520986 addons disable volcano --alsologtostderr -v=1: (11.994786932s)
--- PASS: TestAddons/serial/Volcano (44.39s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:690: (dbg) Run:  kubectl --context addons-520986 create ns new-namespace
addons_test.go:704: (dbg) Run:  kubectl --context addons-520986 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (8.51s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:735: (dbg) Run:  kubectl --context addons-520986 create -f testdata/busybox.yaml
addons_test.go:742: (dbg) Run:  kubectl --context addons-520986 create sa gcp-auth-test
addons_test.go:748: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5c031aa0-247f-47c2-adbd-d72c83753078] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5c031aa0-247f-47c2-adbd-d72c83753078] Running
addons_test.go:748: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004858056s
addons_test.go:754: (dbg) Run:  kubectl --context addons-520986 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:766: (dbg) Run:  kubectl --context addons-520986 describe sa gcp-auth-test
addons_test.go:804: (dbg) Run:  kubectl --context addons-520986 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.51s)

TestAddons/parallel/Registry (14.92s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:442: registry stabilized in 9.65361ms
addons_test.go:444: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-vlvl7" [101e7e22-6338-450e-b175-a29aa66aa838] Running
addons_test.go:444: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005783841s
addons_test.go:447: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-md9zq" [b449333e-cc2d-4741-a901-fdcbae2dbeeb] Running
addons_test.go:447: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004102847s
addons_test.go:452: (dbg) Run:  kubectl --context addons-520986 delete po -l run=registry-test --now
addons_test.go:457: (dbg) Run:  kubectl --context addons-520986 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:457: (dbg) Done: kubectl --context addons-520986 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.025813003s)
addons_test.go:471: (dbg) Run:  out/minikube-linux-amd64 -p addons-520986 ip
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-520986 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.92s)

TestAddons/parallel/RegistryCreds (0.65s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:383: registry-creds stabilized in 21.460271ms
addons_test.go:385: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-520986
addons_test.go:392: (dbg) Run:  kubectl --context addons-520986 -n kube-system get secret -o yaml
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-520986 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.65s)

TestAddons/parallel/Ingress (17.7s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:269: (dbg) Run:  kubectl --context addons-520986 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:294: (dbg) Run:  kubectl --context addons-520986 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:307: (dbg) Run:  kubectl --context addons-520986 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:312: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [b353f50d-1988-4910-a3f6-01d103119dfa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [b353f50d-1988-4910-a3f6-01d103119dfa] Running
addons_test.go:312: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.005311253s
I1209 01:59:26.057361  789441 kapi.go:150] Service nginx in namespace default found.
addons_test.go:324: (dbg) Run:  out/minikube-linux-amd64 -p addons-520986 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:348: (dbg) Run:  kubectl --context addons-520986 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-520986 ip
addons_test.go:359: (dbg) Run:  nslookup hello-john.test 192.168.39.56
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-520986 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-520986 addons disable ingress-dns --alsologtostderr -v=1: (1.386200015s)
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-520986 addons disable ingress --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-520986 addons disable ingress --alsologtostderr -v=1: (8.038951812s)
--- PASS: TestAddons/parallel/Ingress (17.70s)

TestAddons/parallel/InspektorGadget (10.79s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:883: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-9qdtw" [e83bd46d-c482-4222-9a6c-448c736d392d] Running
addons_test.go:883: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006915722s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-520986 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-520986 addons disable inspektor-gadget --alsologtostderr -v=1: (5.778810693s)
--- PASS: TestAddons/parallel/InspektorGadget (10.79s)

TestAddons/parallel/MetricsServer (6s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:515: metrics-server stabilized in 10.136237ms
addons_test.go:517: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-6h6ks" [9933e398-1bd2-4f95-9968-ac571b18b98d] Running
addons_test.go:517: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006474972s
addons_test.go:523: (dbg) Run:  kubectl --context addons-520986 top pods -n kube-system
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-520986 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.00s)

TestAddons/parallel/CSI (48.04s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1209 01:58:57.131013  789441 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1209 01:58:57.137546  789441 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1209 01:58:57.137580  789441 kapi.go:107] duration metric: took 6.579913ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:609: csi-hostpath-driver pods stabilized in 6.595136ms
addons_test.go:612: (dbg) Run:  kubectl --context addons-520986 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/12/09 01:59:11 [DEBUG] GET http://192.168.39.56:5000
addons_test.go:622: (dbg) Run:  kubectl --context addons-520986 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [23840c24-aaa0-44db-92f5-861192191764] Pending
helpers_test.go:352: "task-pv-pod" [23840c24-aaa0-44db-92f5-861192191764] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [23840c24-aaa0-44db-92f5-861192191764] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004934821s
addons_test.go:632: (dbg) Run:  kubectl --context addons-520986 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:637: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-520986 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-520986 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:642: (dbg) Run:  kubectl --context addons-520986 delete pod task-pv-pod
addons_test.go:648: (dbg) Run:  kubectl --context addons-520986 delete pvc hpvc
addons_test.go:654: (dbg) Run:  kubectl --context addons-520986 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:659: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520986 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:664: (dbg) Run:  kubectl --context addons-520986 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:669: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [ac38a665-d29b-4359-a463-ce096418e9cd] Pending
helpers_test.go:352: "task-pv-pod-restore" [ac38a665-d29b-4359-a463-ce096418e9cd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [ac38a665-d29b-4359-a463-ce096418e9cd] Running
addons_test.go:669: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0039928s
addons_test.go:674: (dbg) Run:  kubectl --context addons-520986 delete pod task-pv-pod-restore
addons_test.go:678: (dbg) Run:  kubectl --context addons-520986 delete pvc hpvc-restore
addons_test.go:682: (dbg) Run:  kubectl --context addons-520986 delete volumesnapshot new-snapshot-demo
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-520986 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-520986 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-520986 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.823929178s)
--- PASS: TestAddons/parallel/CSI (48.04s)
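Note: the CSI flow above creates a PVC, snapshots the bound volume, and restores it into a new claim, repeatedly shelling out to `kubectl get pvc ... -o jsonpath={.status.phase}` until each claim reports Bound. A rough standalone sketch of that polling loop follows (profile, namespace and claim name are taken from the log; the real helpers in helpers_test.go are not reproduced here).

```go
// pvc_wait_sketch.go - an illustrative approximation of the PVC polling seen above,
// not the actual minikube test helper.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls the claim's phase via kubectl until it is "Bound" or the
// timeout elapses.
func waitForPVCBound(kubeContext, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	// Values taken from the log; adjust for a local cluster.
	if err := waitForPVCBound("addons-520986", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```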

TestAddons/parallel/Headlamp (18.74s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:868: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-520986 --alsologtostderr -v=1
addons_test.go:873: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-8rjw8" [8344d961-1813-4385-af81-ba7a668a4061] Pending
helpers_test.go:352: "headlamp-dfcdc64b-8rjw8" [8344d961-1813-4385-af81-ba7a668a4061] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-8rjw8" [8344d961-1813-4385-af81-ba7a668a4061] Running
addons_test.go:873: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004913431s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-520986 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-520986 addons disable headlamp --alsologtostderr -v=1: (5.810798916s)
--- PASS: TestAddons/parallel/Headlamp (18.74s)

TestAddons/parallel/CloudSpanner (6.55s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:900: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-2qcdq" [ec1f1db1-60d1-4cdb-980c-a1fb8433bc3f] Running
addons_test.go:900: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004490179s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-520986 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.55s)

TestAddons/parallel/NvidiaDevicePlugin (6.47s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1085: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-fmfwp" [6680e716-57e7-4dac-bfc6-474c174bfa12] Running
addons_test.go:1085: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004547381s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-520986 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.47s)

TestAddons/parallel/Yakd (11.95s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1107: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-vx9dq" [81ee2779-8b62-4894-9111-e4372baad8a1] Running
addons_test.go:1107: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004169079s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-520986 addons disable yakd --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-520986 addons disable yakd --alsologtostderr -v=1: (5.94572823s)
--- PASS: TestAddons/parallel/Yakd (11.95s)

TestAddons/StoppedEnableDisable (87.67s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:177: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-520986
addons_test.go:177: (dbg) Done: out/minikube-linux-amd64 stop -p addons-520986: (1m27.458767394s)
addons_test.go:181: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-520986
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-520986
addons_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-520986
--- PASS: TestAddons/StoppedEnableDisable (87.67s)

TestCertOptions (85.22s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-584249 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
E1209 03:10:18.661155  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-584249 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m23.962903607s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-584249 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-584249 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-584249 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-584249" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-584249
--- PASS: TestCertOptions (85.22s)
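Note: TestCertOptions passes extra --apiserver-ips/--apiserver-names/--apiserver-port values and then dumps the apiserver certificate inside the node to confirm they were honoured. A rough sketch of that verification step follows, assuming the binary path and profile name shown in the log.

```go
// cert_sans_sketch.go - an illustrative sketch of the SAN check performed above:
// dump the apiserver certificate inside the node and look for the requested names.
// Not the actual test code; error handling is minimal.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "cert-options-584249" // profile name from the log; any profile works
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
	if err != nil {
		fmt.Println("ssh/openssl failed:", err)
		return
	}
	// SAN values passed on the start command line above.
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		if !strings.Contains(string(out), want) {
			fmt.Println("missing SAN:", want)
		}
	}
}
```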

TestCertExpiration (297.57s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-302584 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-302584 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m21.526953439s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-302584 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-302584 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (35.002729835s)
helpers_test.go:175: Cleaning up "cert-expiration-302584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-302584
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-302584: (1.036652845s)
--- PASS: TestCertExpiration (297.57s)

TestForceSystemdFlag (65.28s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-755282 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-755282 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m4.157304091s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-755282 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-755282" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-755282
E1209 03:10:01.731687  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestForceSystemdFlag (65.28s)

TestForceSystemdEnv (65.14s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-671834 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-671834 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m3.928210996s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-671834 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-671834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-671834
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-671834: (1.011474219s)
--- PASS: TestForceSystemdEnv (65.14s)

TestErrorSpam/setup (43.71s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-417514 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-417514 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-417514 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-417514 --driver=kvm2  --container-runtime=containerd: (43.708025828s)
--- PASS: TestErrorSpam/setup (43.71s)

TestErrorSpam/start (0.35s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417514 --log_dir /tmp/nospam-417514 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417514 --log_dir /tmp/nospam-417514 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417514 --log_dir /tmp/nospam-417514 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.69s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417514 --log_dir /tmp/nospam-417514 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417514 --log_dir /tmp/nospam-417514 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417514 --log_dir /tmp/nospam-417514 status
--- PASS: TestErrorSpam/status (0.69s)

TestErrorSpam/pause (1.51s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417514 --log_dir /tmp/nospam-417514 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417514 --log_dir /tmp/nospam-417514 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417514 --log_dir /tmp/nospam-417514 pause
--- PASS: TestErrorSpam/pause (1.51s)

TestErrorSpam/unpause (1.75s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417514 --log_dir /tmp/nospam-417514 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417514 --log_dir /tmp/nospam-417514 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417514 --log_dir /tmp/nospam-417514 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

TestErrorSpam/stop (4.8s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417514 --log_dir /tmp/nospam-417514 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-417514 --log_dir /tmp/nospam-417514 stop: (1.650148603s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417514 --log_dir /tmp/nospam-417514 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-417514 --log_dir /tmp/nospam-417514 stop: (1.287195166s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417514 --log_dir /tmp/nospam-417514 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-417514 --log_dir /tmp/nospam-417514 stop: (1.866136536s)
--- PASS: TestErrorSpam/stop (4.80s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22081-785489/.minikube/files/etc/test/nested/copy/789441/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (78.03s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-804291 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E1209 02:07:55.205817  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:07:55.212239  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:07:55.223671  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:07:55.245169  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:07:55.286660  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:07:55.368275  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:07:55.529887  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:07:55.851668  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:07:56.493744  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:07:57.775421  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:08:00.338306  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:08:05.460528  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:08:15.702332  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:08:36.183858  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-804291 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m18.029472338s)
--- PASS: TestFunctional/serial/StartWithProxy (78.03s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (46.31s)
=== RUN   TestFunctional/serial/SoftStart
I1209 02:08:39.910553  789441 config.go:182] Loaded profile config "functional-804291": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-804291 --alsologtostderr -v=8
E1209 02:09:17.145982  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-804291 --alsologtostderr -v=8: (46.304041458s)
functional_test.go:678: soft start took 46.304879371s for "functional-804291" cluster.
I1209 02:09:26.215012  789441 config.go:182] Loaded profile config "functional-804291": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (46.31s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-804291 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.82s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-804291 cache add registry.k8s.io/pause:3.3: (1.039385402s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.82s)

TestFunctional/serial/CacheCmd/cache/add_local (1.33s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-804291 /tmp/TestFunctionalserialCacheCmdcacheadd_local337648010/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 cache add minikube-local-cache-test:functional-804291
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 cache delete minikube-local-cache-test:functional-804291
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-804291
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.37s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-804291 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (180.697249ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.37s)
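Note: the cache_reload subtest removes a cached image inside the node, confirms crictl no longer finds it (the FATA line above), then runs `cache reload` and checks the image is back. A condensed sketch of that sequence follows; it mirrors the commands in the log rather than the test source.

```go
// cache_reload_sketch.go - a condensed sketch of the flow logged above. Binary path,
// profile and image name are taken from the log; not the actual test code.
package main

import (
	"fmt"
	"os/exec"
)

// mk runs the minikube binary used by this test run with the given arguments.
func mk(args ...string) error {
	return exec.Command("out/minikube-linux-amd64", args...).Run()
}

func main() {
	const profile = "functional-804291"
	const image = "registry.k8s.io/pause:latest"

	_ = mk("-p", profile, "ssh", "sudo crictl rmi "+image) // drop the image in the node
	if mk("-p", profile, "ssh", "sudo crictl inspecti "+image) == nil {
		fmt.Println("image unexpectedly still present after rmi")
	}
	_ = mk("-p", profile, "cache", "reload") // push cached images back into the node
	if err := mk("-p", profile, "ssh", "sudo crictl inspecti "+image); err != nil {
		fmt.Println("image still missing after cache reload:", err)
		return
	}
	fmt.Println("image restored from cache")
}
```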

TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 kubectl -- --context functional-804291 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-804291 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (38.43s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-804291 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-804291 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.425990892s)
functional_test.go:776: restart took 38.426117278s for "functional-804291" cluster.
I1209 02:10:10.992984  789441 config.go:182] Loaded profile config "functional-804291": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (38.43s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-804291 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.22s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-804291 logs: (1.217198805s)
--- PASS: TestFunctional/serial/LogsCmd (1.22s)

TestFunctional/serial/LogsFileCmd (1.25s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 logs --file /tmp/TestFunctionalserialLogsFileCmd1478842355/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-804291 logs --file /tmp/TestFunctionalserialLogsFileCmd1478842355/001/logs.txt: (1.251347661s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

TestFunctional/serial/InvalidService (4.5s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-804291 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-804291
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-804291: exit status 115 (554.732443ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.95:30609 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-804291 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.50s)

TestFunctional/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-804291 config get cpus: exit status 14 (69.238994ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-804291 config get cpus: exit status 14 (67.851077ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
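Note: the ConfigCmd run above relies on `minikube config get` exiting with status 14 when the key is unset. A small sketch of asserting that exit code follows (the status-code value is taken from the log, not from minikube's source).

```go
// config_exit_sketch.go - an illustrative check of the exit-status behaviour logged above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-804291", "config", "get", "cpus")
	err := cmd.Run()

	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 14:
		fmt.Println("key not found in config (exit 14), as the test expects")
	case err != nil:
		fmt.Println("unexpected error:", err)
	default:
		fmt.Println("key is set")
	}
}
```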

TestFunctional/parallel/DashboardCmd (19.83s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-804291 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-804291 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 798790: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.83s)

TestFunctional/parallel/DryRun (0.27s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-804291 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-804291 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (139.300218ms)

-- stdout --
	* [functional-804291] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1209 02:10:20.962463  798712 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:10:20.962777  798712 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:10:20.962787  798712 out.go:374] Setting ErrFile to fd 2...
	I1209 02:10:20.962792  798712 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:10:20.962974  798712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
	I1209 02:10:20.963499  798712 out.go:368] Setting JSON to false
	I1209 02:10:20.964533  798712 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":28371,"bootTime":1765217850,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:10:20.964616  798712 start.go:143] virtualization: kvm guest
	I1209 02:10:20.966514  798712 out.go:179] * [functional-804291] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:10:20.968093  798712 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:10:20.968089  798712 notify.go:221] Checking for updates...
	I1209 02:10:20.970960  798712 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:10:20.972620  798712 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
	I1209 02:10:20.973736  798712 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
	I1209 02:10:20.975048  798712 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:10:20.976507  798712 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:10:20.978596  798712 config.go:182] Loaded profile config "functional-804291": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 02:10:20.979411  798712 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:10:21.015422  798712 out.go:179] * Using the kvm2 driver based on existing profile
	I1209 02:10:21.016702  798712 start.go:309] selected driver: kvm2
	I1209 02:10:21.016724  798712 start.go:927] validating driver "kvm2" against &{Name:functional-804291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-804291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:10:21.016879  798712 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:10:21.019158  798712 out.go:203] 
	W1209 02:10:21.020306  798712 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 02:10:21.021357  798712 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-804291 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.27s)
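Note: DryRun expects a dry-run start with only 250MB of requested memory to be rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY in the stderr above). A rough sketch of asserting that behaviour follows; binary path, profile and flags come from the log.

```go
// dryrun_memory_sketch.go - an illustrative assertion of the memory-validation failure
// logged above; not the actual test code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-804291",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=containerd")
	err := cmd.Run()

	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 23:
		fmt.Println("rejected as expected: requested memory below the usable minimum")
	case err == nil:
		fmt.Println("unexpected success: memory validation did not trigger")
	default:
		fmt.Println("unexpected failure:", err)
	}
}
```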

TestFunctional/parallel/InternationalLanguage (0.15s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-804291 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-804291 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (152.83259ms)

-- stdout --
	* [functional-804291] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1209 02:10:20.810188  798697 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:10:20.810351  798697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:10:20.810365  798697 out.go:374] Setting ErrFile to fd 2...
	I1209 02:10:20.810373  798697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:10:20.810834  798697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
	I1209 02:10:20.811473  798697 out.go:368] Setting JSON to false
	I1209 02:10:20.812773  798697 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":28371,"bootTime":1765217850,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:10:20.812869  798697 start.go:143] virtualization: kvm guest
	I1209 02:10:20.814958  798697 out.go:179] * [functional-804291] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1209 02:10:20.816391  798697 notify.go:221] Checking for updates...
	I1209 02:10:20.817684  798697 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:10:20.818876  798697 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:10:20.820151  798697 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
	I1209 02:10:20.821344  798697 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
	I1209 02:10:20.822484  798697 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:10:20.823645  798697 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:10:20.825589  798697 config.go:182] Loaded profile config "functional-804291": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 02:10:20.826421  798697 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:10:20.873020  798697 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1209 02:10:20.874049  798697 start.go:309] selected driver: kvm2
	I1209 02:10:20.874069  798697 start.go:927] validating driver "kvm2" against &{Name:functional-804291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-804291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:10:20.874215  798697 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:10:20.876607  798697 out.go:203] 
	W1209 02:10:20.877686  798697 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1209 02:10:20.879247  798697 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.93s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.93s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (349.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-804291 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-804291 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-pxdq8" [de1db751-2380-41a3-be2b-2533a62d4103] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-pxdq8" [de1db751-2380-41a3-be2b-2533a62d4103] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 5m49.003964464s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.95:31877
functional_test.go:1680: http://192.168.39.95:31877: success! body:
Request served by hello-node-connect-7d85dfc575-pxdq8

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.95:31877
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (349.66s)
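Note: the two kubectl commands above create the workload imperatively (kubectl create deployment / kubectl expose). The manifest below is a rough declarative equivalent, sketched for reference only; it is not taken from the test's testdata, the label/selector and container name follow kubectl's defaults, and the NodePort itself is assigned by the cluster (31877 in this run).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node-connect
  labels:
    app: hello-node-connect
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-node-connect
  template:
    metadata:
      labels:
        app: hello-node-connect
    spec:
      containers:
      - name: echo-server            # kubectl derives the container name from the image; matches the [echo-server] container reported above
        image: kicbase/echo-server
---
apiVersion: v1
kind: Service
metadata:
  name: hello-node-connect
spec:
  type: NodePort
  selector:
    app: hello-node-connect
  ports:
  - port: 8080
    targetPort: 8080                 # nodePort left unset; the cluster assigns one (31877 here)

"minikube service hello-node-connect --url" then resolves the node IP and the assigned NodePort, which is the URL the test fetches above.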

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (39.85s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [0ad2d0de-e709-4995-b0b2-fdd6caac3fd2] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004720381s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-804291 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-804291 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-804291 get pvc myclaim -o=json
I1209 02:10:25.969973  789441 retry.go:31] will retry after 2.80042558s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:891a8735-b90c-46be-bed0-6d3f4f044566 ResourceVersion:771 Generation:0 CreationTimestamp:2025-12-09 02:10:25 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001c58e90 VolumeMode:0xc001c58ea0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-804291 get pvc myclaim -o=json
I1209 02:10:28.840743  789441 retry.go:31] will retry after 2.65620048s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:891a8735-b90c-46be-bed0-6d3f4f044566 ResourceVersion:771 Generation:0 CreationTimestamp:2025-12-09 02:10:25 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc00195c5f0 VolumeMode:0xc00195c600 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-804291 get pvc myclaim -o=json
I1209 02:10:31.797333  789441 retry.go:31] will retry after 5.828005043s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:891a8735-b90c-46be-bed0-6d3f4f044566 ResourceVersion:771 Generation:0 CreationTimestamp:2025-12-09 02:10:25 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc00195ca60 VolumeMode:0xc00195ca70 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-804291 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-804291 apply -f testdata/storage-provisioner/pod.yaml
I1209 02:10:37.864374  789441 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4fb9b00b-c0fb-4e87-86b8-3412691950ef] Pending
helpers_test.go:352: "sp-pod" [4fb9b00b-c0fb-4e87-86b8-3412691950ef] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1209 02:10:39.068042  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
2025/12/09 02:10:40 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "sp-pod" [4fb9b00b-c0fb-4e87-86b8-3412691950ef] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004816423s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-804291 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-804291 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-804291 delete -f testdata/storage-provisioner/pod.yaml: (1.247467823s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-804291 apply -f testdata/storage-provisioner/pod.yaml
I1209 02:10:53.372818  789441 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ff18c033-fe19-480f-b292-2ff9e5ce44f6] Pending
helpers_test.go:352: "sp-pod" [ff18c033-fe19-480f-b292-2ff9e5ce44f6] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.038364563s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-804291 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.85s)
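Note: the exact claim the test applies can be read back from the kubectl.kubernetes.io/last-applied-configuration annotation in the retry output above. Rendered as YAML for readability (same object, not new testdata):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem

The claim stays Pending until the k8s.io/minikube-hostpath provisioner binds it, which is why the phase check retries above before sp-pod is applied.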

                                                
                                    
TestFunctional/parallel/SSHCmd (0.34s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.21s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh -n functional-804291 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 cp functional-804291:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd811271858/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh -n functional-804291 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh -n functional-804291 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.21s)

                                                
                                    
TestFunctional/parallel/MySQL (31.66s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-804291 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-6bcdcbc558-b9csv" [cb831199-85c5-4efd-afda-6aa722b6d8cf] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-6bcdcbc558-b9csv" [cb831199-85c5-4efd-afda-6aa722b6d8cf] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.005893713s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-804291 exec mysql-6bcdcbc558-b9csv -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-804291 exec mysql-6bcdcbc558-b9csv -- mysql -ppassword -e "show databases;": exit status 1 (252.787285ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 02:10:41.919092  789441 retry.go:31] will retry after 609.79509ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-804291 exec mysql-6bcdcbc558-b9csv -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-804291 exec mysql-6bcdcbc558-b9csv -- mysql -ppassword -e "show databases;": exit status 1 (141.022607ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 02:10:42.670431  789441 retry.go:31] will retry after 1.013205647s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-804291 exec mysql-6bcdcbc558-b9csv -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-804291 exec mysql-6bcdcbc558-b9csv -- mysql -ppassword -e "show databases;": exit status 1 (280.1856ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 02:10:43.964979  789441 retry.go:31] will retry after 2.079931701s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-804291 exec mysql-6bcdcbc558-b9csv -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-804291 exec mysql-6bcdcbc558-b9csv -- mysql -ppassword -e "show databases;": exit status 1 (209.798538ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 02:10:46.255818  789441 retry.go:31] will retry after 3.733555562s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-804291 exec mysql-6bcdcbc558-b9csv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.66s)
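Note: the ERROR 2002/1045 responses above are expected while mysqld is still initializing; the test simply re-runs the query until it succeeds. If one wanted the pod to report Ready only once the server accepts the root password, a readiness probe along the following lines would work. This is a sketch under assumptions (the pod name and probe timings are hypothetical, and the MYSQL_ROOT_PASSWORD value is inferred from the test's "mysql -ppassword" invocation); it is not the contents of testdata/mysql.yaml.

apiVersion: v1
kind: Pod
metadata:
  name: mysql-readiness-sketch      # hypothetical name, not from the test
spec:
  containers:
  - name: mysql
    image: public.ecr.aws/docker/library/mysql:8.4   # image shown in the image list output below
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: password               # inferred from the -ppassword flag used above
    readinessProbe:
      exec:
        command: ["mysql", "-uroot", "-ppassword", "-e", "SELECT 1"]
      initialDelaySeconds: 10
      periodSeconds: 5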

                                                
                                    
TestFunctional/parallel/FileSync (0.19s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/789441/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "sudo cat /etc/test/nested/copy/789441/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

                                                
                                    
TestFunctional/parallel/CertSync (1.17s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/789441.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "sudo cat /etc/ssl/certs/789441.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/789441.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "sudo cat /usr/share/ca-certificates/789441.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/7894412.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "sudo cat /etc/ssl/certs/7894412.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/7894412.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "sudo cat /usr/share/ca-certificates/7894412.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.17s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-804291 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-804291 ssh "sudo systemctl is-active docker": exit status 1 (191.745681ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-804291 ssh "sudo systemctl is-active crio": exit status 1 (192.488784ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

                                                
                                    
TestFunctional/parallel/License (0.41s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-804291 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-804291
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-804291
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-804291 image ls --format short --alsologtostderr:
I1209 02:10:59.260734  799715 out.go:360] Setting OutFile to fd 1 ...
I1209 02:10:59.261003  799715 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:10:59.261015  799715 out.go:374] Setting ErrFile to fd 2...
I1209 02:10:59.261019  799715 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:10:59.261277  799715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
I1209 02:10:59.261857  799715 config.go:182] Loaded profile config "functional-804291": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 02:10:59.261970  799715 config.go:182] Loaded profile config "functional-804291": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 02:10:59.263996  799715 ssh_runner.go:195] Run: systemctl --version
I1209 02:10:59.266119  799715 main.go:143] libmachine: domain functional-804291 has defined MAC address 52:54:00:b6:b5:96 in network mk-functional-804291
I1209 02:10:59.266481  799715 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:b5:96", ip: ""} in network mk-functional-804291: {Iface:virbr1 ExpiryTime:2025-12-09 03:07:37 +0000 UTC Type:0 Mac:52:54:00:b6:b5:96 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-804291 Clientid:01:52:54:00:b6:b5:96}
I1209 02:10:59.266507  799715 main.go:143] libmachine: domain functional-804291 has defined IP address 192.168.39.95 and MAC address 52:54:00:b6:b5:96 in network mk-functional-804291
I1209 02:10:59.266656  799715 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/functional-804291/id_rsa Username:docker}
I1209 02:10:59.357615  799715 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-804291 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ public.ecr.aws/docker/library/mysql         │ 8.4                │ sha256:20d0be │ 233MB  │
│ public.ecr.aws/nginx/nginx                  │ alpine             │ sha256:d4918c │ 22.6MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/kicbase/echo-server               │ functional-804291  │ sha256:9056ab │ 2.37MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:a5f569 │ 27.1MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:8aa150 │ 26MB   │
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:88320b │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ docker.io/library/minikube-local-cache-test │ functional-804291  │ sha256:222a70 │ 991B   │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:a3e246 │ 22.9MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:01e8ba │ 22.8MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-804291 image ls --format table --alsologtostderr:
I1209 02:10:59.904756  799779 out.go:360] Setting OutFile to fd 1 ...
I1209 02:10:59.904846  799779 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:10:59.904851  799779 out.go:374] Setting ErrFile to fd 2...
I1209 02:10:59.904855  799779 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:10:59.905106  799779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
I1209 02:10:59.905735  799779 config.go:182] Loaded profile config "functional-804291": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 02:10:59.905849  799779 config.go:182] Loaded profile config "functional-804291": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 02:10:59.908121  799779 ssh_runner.go:195] Run: systemctl --version
I1209 02:10:59.910369  799779 main.go:143] libmachine: domain functional-804291 has defined MAC address 52:54:00:b6:b5:96 in network mk-functional-804291
I1209 02:10:59.910810  799779 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:b5:96", ip: ""} in network mk-functional-804291: {Iface:virbr1 ExpiryTime:2025-12-09 03:07:37 +0000 UTC Type:0 Mac:52:54:00:b6:b5:96 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-804291 Clientid:01:52:54:00:b6:b5:96}
I1209 02:10:59.910832  799779 main.go:143] libmachine: domain functional-804291 has defined IP address 192.168.39.95 and MAC address 52:54:00:b6:b5:96 in network mk-functional-804291
I1209 02:10:59.910991  799779 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/functional-804291/id_rsa Username:docker}
I1209 02:10:59.993055  799779 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-804291 image ls --format json --alsologtostderr:
[{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-804291"],"size":"2372971"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"22871747"},{"id":"sha256:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e
40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"17382272"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:222a7069d2bbef084c697410e74e8d4c827acc019506a05a93cb34ca948652ea","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-804291"],"size":"991"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977
a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"233030909"},{"id":"sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22621747"},{"id":"sha256:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"25963482"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io
/pause:3.1"],"size":"315399"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"27060130"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:01e8bacf0f50095b9b12daf485
979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"22818657"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-804291 image ls --format json --alsologtostderr:
I1209 02:10:59.690331  799757 out.go:360] Setting OutFile to fd 1 ...
I1209 02:10:59.690426  799757 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:10:59.690430  799757 out.go:374] Setting ErrFile to fd 2...
I1209 02:10:59.690434  799757 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:10:59.690630  799757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
I1209 02:10:59.691171  799757 config.go:182] Loaded profile config "functional-804291": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 02:10:59.691274  799757 config.go:182] Loaded profile config "functional-804291": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 02:10:59.693179  799757 ssh_runner.go:195] Run: systemctl --version
I1209 02:10:59.695737  799757 main.go:143] libmachine: domain functional-804291 has defined MAC address 52:54:00:b6:b5:96 in network mk-functional-804291
I1209 02:10:59.696230  799757 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:b5:96", ip: ""} in network mk-functional-804291: {Iface:virbr1 ExpiryTime:2025-12-09 03:07:37 +0000 UTC Type:0 Mac:52:54:00:b6:b5:96 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-804291 Clientid:01:52:54:00:b6:b5:96}
I1209 02:10:59.696257  799757 main.go:143] libmachine: domain functional-804291 has defined IP address 192.168.39.95 and MAC address 52:54:00:b6:b5:96 in network mk-functional-804291
I1209 02:10:59.696405  799757 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/functional-804291/id_rsa Username:docker}
I1209 02:10:59.786498  799757 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-804291 image ls --format yaml --alsologtostderr:
- id: sha256:222a7069d2bbef084c697410e74e8d4c827acc019506a05a93cb34ca948652ea
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-804291
size: "991"
- id: sha256:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "233030909"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "22871747"
- id: sha256:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "22818657"
- id: sha256:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "25963482"
- id: sha256:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "17382272"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22621747"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "27060130"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-804291
size: "2372971"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-804291 image ls --format yaml --alsologtostderr:
I1209 02:10:59.463908  799726 out.go:360] Setting OutFile to fd 1 ...
I1209 02:10:59.464233  799726 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:10:59.464245  799726 out.go:374] Setting ErrFile to fd 2...
I1209 02:10:59.464253  799726 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:10:59.464445  799726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
I1209 02:10:59.465023  799726 config.go:182] Loaded profile config "functional-804291": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 02:10:59.465115  799726 config.go:182] Loaded profile config "functional-804291": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 02:10:59.467004  799726 ssh_runner.go:195] Run: systemctl --version
I1209 02:10:59.469338  799726 main.go:143] libmachine: domain functional-804291 has defined MAC address 52:54:00:b6:b5:96 in network mk-functional-804291
I1209 02:10:59.469734  799726 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:b5:96", ip: ""} in network mk-functional-804291: {Iface:virbr1 ExpiryTime:2025-12-09 03:07:37 +0000 UTC Type:0 Mac:52:54:00:b6:b5:96 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-804291 Clientid:01:52:54:00:b6:b5:96}
I1209 02:10:59.469763  799726 main.go:143] libmachine: domain functional-804291 has defined IP address 192.168.39.95 and MAC address 52:54:00:b6:b5:96 in network mk-functional-804291
I1209 02:10:59.469932  799726 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/functional-804291/id_rsa Username:docker}
I1209 02:10:59.568934  799726 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-804291 ssh pgrep buildkitd: exit status 1 (169.327142ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 image build -t localhost/my-image:functional-804291 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-804291 image build -t localhost/my-image:functional-804291 testdata/build --alsologtostderr: (2.030777209s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-804291 image build -t localhost/my-image:functional-804291 testdata/build --alsologtostderr:
I1209 02:10:59.773753  799768 out.go:360] Setting OutFile to fd 1 ...
I1209 02:10:59.773885  799768 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:10:59.773893  799768 out.go:374] Setting ErrFile to fd 2...
I1209 02:10:59.773897  799768 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:10:59.774079  799768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
I1209 02:10:59.774700  799768 config.go:182] Loaded profile config "functional-804291": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 02:10:59.775440  799768 config.go:182] Loaded profile config "functional-804291": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 02:10:59.777584  799768 ssh_runner.go:195] Run: systemctl --version
I1209 02:10:59.779678  799768 main.go:143] libmachine: domain functional-804291 has defined MAC address 52:54:00:b6:b5:96 in network mk-functional-804291
I1209 02:10:59.780081  799768 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:b5:96", ip: ""} in network mk-functional-804291: {Iface:virbr1 ExpiryTime:2025-12-09 03:07:37 +0000 UTC Type:0 Mac:52:54:00:b6:b5:96 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-804291 Clientid:01:52:54:00:b6:b5:96}
I1209 02:10:59.780106  799768 main.go:143] libmachine: domain functional-804291 has defined IP address 192.168.39.95 and MAC address 52:54:00:b6:b5:96 in network mk-functional-804291
I1209 02:10:59.780256  799768 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/functional-804291/id_rsa Username:docker}
I1209 02:10:59.868880  799768 build_images.go:162] Building image from path: /tmp/build.481304317.tar
I1209 02:10:59.868957  799768 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 02:10:59.885440  799768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.481304317.tar
I1209 02:10:59.891058  799768 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.481304317.tar: stat -c "%s %y" /var/lib/minikube/build/build.481304317.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.481304317.tar': No such file or directory
I1209 02:10:59.891091  799768 ssh_runner.go:362] scp /tmp/build.481304317.tar --> /var/lib/minikube/build/build.481304317.tar (3072 bytes)
I1209 02:10:59.930172  799768 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.481304317
I1209 02:10:59.942011  799768 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.481304317 -xf /var/lib/minikube/build/build.481304317.tar
I1209 02:10:59.954868  799768 containerd.go:394] Building image: /var/lib/minikube/build/build.481304317
I1209 02:10:59.954939  799768 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.481304317 --local dockerfile=/var/lib/minikube/build/build.481304317 --output type=image,name=localhost/my-image:functional-804291
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.5s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:2afb8200be1eed269b903965e8bbe8f4070dcfdc8c78b31bac2058706b3c858c
#8 exporting manifest sha256:2afb8200be1eed269b903965e8bbe8f4070dcfdc8c78b31bac2058706b3c858c 0.0s done
#8 exporting config sha256:0a5bc9fe888f4001a7e93f4ff70416ff97244da07ffa300c766fe57050b97f53 0.0s done
#8 naming to localhost/my-image:functional-804291 done
#8 DONE 0.2s
I1209 02:11:01.680069  799768 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.481304317 --local dockerfile=/var/lib/minikube/build/build.481304317 --output type=image,name=localhost/my-image:functional-804291: (1.725084962s)
I1209 02:11:01.680160  799768 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.481304317
I1209 02:11:01.712483  799768 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.481304317.tar
I1209 02:11:01.734908  799768 build_images.go:218] Built localhost/my-image:functional-804291 from /tmp/build.481304317.tar
I1209 02:11:01.734955  799768 build_images.go:134] succeeded building to: functional-804291
I1209 02:11:01.734960  799768 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 image ls
E1209 02:12:55.204460  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:13:22.910211  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.41s)
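
Note: the ImageBuild subtest above shows the full flow minikube uses for "image build": the build context is packed into a tar on the host, copied to the node under /var/lib/minikube/build, and built there with buildctl against containerd (the stages in this run correspond to a three-step Dockerfile: FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt). A minimal sketch of reproducing it by hand, with a hypothetical profile name "demo" and a local context directory:

  # build an image directly inside the cluster's containerd image store
  minikube -p demo image build -t localhost/my-image:demo ./build-context --alsologtostderr

  # confirm the freshly built image is visible to the node's runtime
  minikube -p demo image ls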

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-804291
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.00s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)
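
Note: the three UpdateContextCmd subtests (no_changes, no_minikube_cluster, no_clusters) all drive the same command, which rewrites the kubeconfig entry for a profile so kubectl points at the cluster's current API server address. A sketch of manual use, assuming a hypothetical profile "demo":

  # refresh the kubeconfig entry for the profile
  minikube -p demo update-context --alsologtostderr -v=2

  # check which context kubectl now resolves
  kubectl config current-context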

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 image load --daemon kicbase/echo-server:functional-804291 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-804291 image load --daemon kicbase/echo-server:functional-804291 --alsologtostderr: (1.256011851s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.60s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 image load --daemon kicbase/echo-server:functional-804291 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-804291 image load --daemon kicbase/echo-server:functional-804291 --alsologtostderr: (2.303684342s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.60s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-804291
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 image load --daemon kicbase/echo-server:functional-804291 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-804291 image load --daemon kicbase/echo-server:functional-804291 --alsologtostderr: (1.108832327s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.74s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 image save kicbase/echo-server:functional-804291 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.40s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 image rm kicbase/echo-server:functional-804291 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.08s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-804291
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 image save --daemon kicbase/echo-server:functional-804291 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-804291
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)
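
Note: taken together, the ImageCommands subtests above walk an image through a full round trip between the host Docker daemon, a tarball on disk, and the cluster's containerd store. A condensed sketch with hypothetical profile/tag names ("demo") and an arbitrary tarball path:

  # seed a test image in the host daemon
  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:demo

  # load it from the daemon into the cluster and list the node's images
  minikube -p demo image load --daemon kicbase/echo-server:demo
  minikube -p demo image ls

  # save to a tarball, remove it from the node, then restore it from the file
  minikube -p demo image save kicbase/echo-server:demo ./echo-server-save.tar
  minikube -p demo image rm kicbase/echo-server:demo
  minikube -p demo image load ./echo-server-save.tar

  # pull the image back out of the cluster into the host daemon
  docker rmi kicbase/echo-server:demo
  minikube -p demo image save --daemon kicbase/echo-server:demo
  docker image inspect kicbase/echo-server:demo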

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (357.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-804291 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-804291 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-p8gkk" [bf631a30-b3cc-4125-a1e1-8349619bf4aa] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-p8gkk" [bf631a30-b3cc-4125-a1e1-8349619bf4aa] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 5m57.005365662s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (357.16s)
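
Note: nearly all of the 357s recorded for DeployApp is the wait for the echo-server pod to become Ready (healthy after roughly 5m57s in this run); the commands themselves are quick. A sketch of the same deployment flow against a hypothetical context/profile "demo":

  # create the deployment and expose it on a NodePort
  kubectl --context demo create deployment hello-node --image kicbase/echo-server
  kubectl --context demo expose deployment hello-node --type=NodePort --port=8080

  # watch for the pod to become Ready, then resolve the service URL
  kubectl --context demo get pods -l app=hello-node
  minikube -p demo service hello-node --url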

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "251.074981ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "60.838928ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "256.588461ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "67.297ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)
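
Note: the three ProfileCmd subtests time the different listing modes; the -l/--light variants skip probing each cluster's status, which is why they finish in roughly 60-70ms here versus roughly 250ms for the full listing. Sketch:

  minikube profile list                  # full table, includes cluster status
  minikube profile list -l               # light mode, status checks skipped
  minikube profile list -o json          # machine-readable output
  minikube profile list -o json --light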

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (5.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-804291 /tmp/TestFunctionalparallelMountCmdany-port4188919409/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765246251082809260" to /tmp/TestFunctionalparallelMountCmdany-port4188919409/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765246251082809260" to /tmp/TestFunctionalparallelMountCmdany-port4188919409/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765246251082809260" to /tmp/TestFunctionalparallelMountCmdany-port4188919409/001/test-1765246251082809260
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-804291 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (159.838612ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 02:10:51.243017  789441 retry.go:31] will retry after 604.127307ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 02:10 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 02:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 02:10 test-1765246251082809260
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh cat /mount-9p/test-1765246251082809260
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-804291 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [f34e8d96-a042-4a6f-8756-5b5c8f0d6e8f] Pending
helpers_test.go:352: "busybox-mount" [f34e8d96-a042-4a6f-8756-5b5c8f0d6e8f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [f34e8d96-a042-4a6f-8756-5b5c8f0d6e8f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [f34e8d96-a042-4a6f-8756-5b5c8f0d6e8f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.004225283s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-804291 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-804291 /tmp/TestFunctionalparallelMountCmdany-port4188919409/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.17s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-804291 /tmp/TestFunctionalparallelMountCmdspecific-port1168100215/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-804291 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (166.537125ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 02:10:56.416934  789441 retry.go:31] will retry after 453.822276ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-804291 /tmp/TestFunctionalparallelMountCmdspecific-port1168100215/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-804291 ssh "sudo umount -f /mount-9p": exit status 1 (163.973701ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-804291 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-804291 /tmp/TestFunctionalparallelMountCmdspecific-port1168100215/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-804291 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3715232920/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-804291 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3715232920/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-804291 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3715232920/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-804291 ssh "findmnt -T" /mount1: exit status 1 (193.223462ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 02:10:57.756298  789441 retry.go:31] will retry after 677.841617ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-804291 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-804291 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3715232920/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-804291 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3715232920/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-804291 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3715232920/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.41s)
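
Note: the MountCmd subtests cover the 9p host-to-guest mount lifecycle: start a mount (optionally on a fixed port), verify it from inside the guest, and tear everything down with --kill. A sketch with a hypothetical profile "demo" and host/guest paths:

  # share a host directory into the guest over 9p (runs in the foreground)
  minikube mount -p demo /tmp/shared:/mount-9p --alsologtostderr -v=1 &

  # alternatively, request a fixed port instead of an ephemeral one:
  # minikube mount -p demo /tmp/shared:/mount-9p --port 46464 --alsologtostderr -v=1 &

  # verify the mount from inside the guest
  minikube -p demo ssh "findmnt -T /mount-9p"
  minikube -p demo ssh -- ls -la /mount-9p

  # kill any mount processes started for this profile
  minikube mount -p demo --kill=true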

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.20s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-804291 service list: (1.202475324s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.20s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.20s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-804291 service list -o json: (1.202544875s)
functional_test.go:1504: Took "1.202640438s" to run "out/minikube-linux-amd64 -p functional-804291 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.20s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.95:30750
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-804291 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.95:30750
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.25s)
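
Note: the ServiceCmd List/JSONOutput/HTTPS/Format/URL subtests all query the same hello-node NodePort service (reported as 192.168.39.95:30750 in this run) in different output shapes. Sketch against a hypothetical profile "demo":

  minikube -p demo service list                                           # table of services and URLs
  minikube -p demo service list -o json                                   # same data as JSON
  minikube -p demo service hello-node --url                               # plain http:// endpoint
  minikube -p demo service --namespace=default --https --url hello-node   # https:// form
  minikube -p demo service hello-node --url --format={{.IP}}              # node IP only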

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-804291
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-804291
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-804291
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22081-785489/.minikube/files/etc/test/nested/copy/789441/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (76.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-230202 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1209 02:17:55.205165  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-230202 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (1m16.719669837s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (76.72s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (40.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1209 02:17:59.316766  789441 config.go:182] Loaded profile config "functional-230202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-230202 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-230202 --alsologtostderr -v=8: (40.266012293s)
functional_test.go:678: soft start took 40.266357345s for "functional-230202" cluster.
I1209 02:18:39.583199  789441 config.go:182] Loaded profile config "functional-230202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (40.27s)
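
Note: StartWithProxy is the cold start of the functional-230202 profile against the v1.35.0-beta.0 pre-release (about 77s here), and SoftStart re-runs start against the already-running cluster (about 40s). Equivalent invocations with a locally installed minikube binary, using the same flags as above:

  # cold start of a fresh profile on the kvm2 driver with containerd
  minikube start -p functional-230202 --memory=4096 --apiserver-port=8441 --wait=all \
    --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0

  # "soft start": re-run start against the same, already-running profile
  minikube start -p functional-230202 --alsologtostderr -v=8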

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-230202 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-230202 cache add registry.k8s.io/pause:3.3: (1.24563341s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-230202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach327141451/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 cache add minikube-local-cache-test:functional-230202
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 cache delete minikube-local-cache-test:functional-230202
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-230202
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-230202 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (174.024033ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.35s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)
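
Note: the CacheCmd subtests cover minikube's host-side image cache: "cache add" stores an image on the host and loads it into the node's runtime, "cache reload" re-pushes every cached image (used here after deleting pause:latest from the node), and "cache delete" drops entries from the host cache. Sketch with a hypothetical profile "demo":

  # cache images and push them into the node
  minikube -p demo cache add registry.k8s.io/pause:3.1
  minikube -p demo cache add registry.k8s.io/pause:latest
  minikube cache list

  # verify from inside the node
  minikube -p demo ssh sudo crictl images

  # remove one image on the node, then restore it from the cache
  minikube -p demo ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p demo cache reload
  minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest

  # drop entries from the host cache
  minikube cache delete registry.k8s.io/pause:3.1
  minikube cache delete registry.k8s.io/pause:latest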

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 kubectl -- --context functional-230202 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-230202 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (47.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-230202 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-230202 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.562734228s)
functional_test.go:776: restart took 47.562857658s for "functional-230202" cluster.
I1209 02:19:33.658376  789441 config.go:182] Loaded profile config "functional-230202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (47.56s)
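
Note: ExtraConfig restarts the existing cluster with an apiserver admission-plugin override and waits for all components; the ComponentHealth subtest that follows then confirms etcd, kube-apiserver, kube-controller-manager, and kube-scheduler all report Ready. Equivalent manual steps:

  # restart with an extra apiserver flag and wait for everything to come back
  minikube start -p functional-230202 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all

  # spot-check the control-plane pods afterwards
  kubectl --context functional-230202 get po -l tier=control-plane -n kube-system -o=json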

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-230202 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-230202 logs: (1.325571078s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.33s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2895765590/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-230202 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2895765590/001/logs.txt: (1.316247084s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.32s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.88s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-230202 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-230202
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-230202: exit status 115 (248.017127ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.49:31768 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-230202 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.88s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-230202 config get cpus: exit status 14 (85.216241ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-230202 config get cpus: exit status 14 (68.32086ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)
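
Note: ConfigCmd walks the per-profile config lifecycle and relies on "config get" exiting with status 14 when a key is absent (the two Non-zero exit entries above). Sketch with a hypothetical profile "demo":

  minikube -p demo config unset cpus     # start from a clean slate
  minikube -p demo config get cpus       # exit status 14: key not found
  minikube -p demo config set cpus 2
  minikube -p demo config get cpus       # prints the stored value
  minikube -p demo config unset cpus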

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-230202 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-230202 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (126.994227ms)

                                                
                                                
-- stdout --
	* [functional-230202] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 02:19:41.394269  802926 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:19:41.394582  802926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:19:41.394594  802926 out.go:374] Setting ErrFile to fd 2...
	I1209 02:19:41.394601  802926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:19:41.394884  802926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
	I1209 02:19:41.395421  802926 out.go:368] Setting JSON to false
	I1209 02:19:41.396676  802926 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":28931,"bootTime":1765217850,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:19:41.396764  802926 start.go:143] virtualization: kvm guest
	I1209 02:19:41.400757  802926 out.go:179] * [functional-230202] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:19:41.402016  802926 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:19:41.402036  802926 notify.go:221] Checking for updates...
	I1209 02:19:41.405009  802926 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:19:41.406168  802926 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
	I1209 02:19:41.407759  802926 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
	I1209 02:19:41.409216  802926 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:19:41.410550  802926 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:19:41.412166  802926 config.go:182] Loaded profile config "functional-230202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 02:19:41.412864  802926 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:19:41.446831  802926 out.go:179] * Using the kvm2 driver based on existing profile
	I1209 02:19:41.447994  802926 start.go:309] selected driver: kvm2
	I1209 02:19:41.448012  802926 start.go:927] validating driver "kvm2" against &{Name:functional-230202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-230202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:19:41.448122  802926 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:19:41.450085  802926 out.go:203] 
	W1209 02:19:41.451277  802926 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 02:19:41.452238  802926 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-230202 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-230202 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-230202 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (131.393286ms)

                                                
                                                
-- stdout --
	* [functional-230202] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 02:19:41.270994  802908 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:19:41.271094  802908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:19:41.271098  802908 out.go:374] Setting ErrFile to fd 2...
	I1209 02:19:41.271103  802908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:19:41.271434  802908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
	I1209 02:19:41.272011  802908 out.go:368] Setting JSON to false
	I1209 02:19:41.273341  802908 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":28931,"bootTime":1765217850,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:19:41.273412  802908 start.go:143] virtualization: kvm guest
	I1209 02:19:41.275410  802908 out.go:179] * [functional-230202] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1209 02:19:41.276748  802908 notify.go:221] Checking for updates...
	I1209 02:19:41.276767  802908 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:19:41.277994  802908 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:19:41.279533  802908 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
	I1209 02:19:41.281156  802908 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
	I1209 02:19:41.282934  802908 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:19:41.284377  802908 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:19:41.286313  802908 config.go:182] Loaded profile config "functional-230202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 02:19:41.286817  802908 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:19:41.319290  802908 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1209 02:19:41.320372  802908 start.go:309] selected driver: kvm2
	I1209 02:19:41.320389  802908 start.go:927] validating driver "kvm2" against &{Name:functional-230202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-230202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:19:41.320491  802908 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:19:41.322397  802908 out.go:203] 
	W1209 02:19:41.323626  802908 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1209 02:19:41.324703  802908 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.94s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.94s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (8.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-230202 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-230202 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-4pf8j" [0a6e1847-f3a9-4303-a8e2-0222ff363506] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-9f67c86d4-4pf8j" [0a6e1847-f3a9-4303-a8e2-0222ff363506] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003221443s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.49:30967
functional_test.go:1680: http://192.168.39.49:30967: success! body:
Request served by hello-node-connect-9f67c86d4-4pf8j

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.49:30967
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (8.41s)
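For reference, the connectivity check above reduces to an HTTP GET against the NodePort URL that "minikube service hello-node-connect --url" printed, confirming the echo-server reply names the serving pod. A minimal Go sketch of that verification (illustrative only, not part of the test suite; the URL is the endpoint reported in this run):

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	url := "http://192.168.39.49:30967" // endpoint reported by the test run above

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}

	// echo-server replies with "Request served by <pod-name>" followed by the request details.
	if strings.Contains(string(body), "Request served by") {
		fmt.Println("echo-server reachable:", resp.Status)
	} else {
		fmt.Println("unexpected body:", string(body))
	}
}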

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (28.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [c0a4c068-79ea-4075-aa3b-c2633a5ceda6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003819197s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-230202 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-230202 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-230202 get pvc myclaim -o=json
I1209 02:19:47.440478  789441 retry.go:31] will retry after 2.203188374s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:f297eb8d-cfa8-4f94-8fb4-65bc79be00a5 ResourceVersion:802 Generation:0 CreationTimestamp:2025-12-09 02:19:47 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-f297eb8d-cfa8-4f94-8fb4-65bc79be00a5 StorageClassName:0xc001395ef0 VolumeMode:0xc001395f00 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-230202 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-230202 apply -f testdata/storage-provisioner/pod.yaml
I1209 02:19:49.827467  789441 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [32861fcb-327d-4c9b-b419-ca124bcdb650] Pending
helpers_test.go:352: "sp-pod" [32861fcb-327d-4c9b-b419-ca124bcdb650] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [32861fcb-327d-4c9b-b419-ca124bcdb650] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004117174s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-230202 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-230202 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-230202 delete -f testdata/storage-provisioner/pod.yaml: (1.344690459s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-230202 apply -f testdata/storage-provisioner/pod.yaml
I1209 02:20:02.536065  789441 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e5cace0b-2759-43ad-a23f-59e884aeaa35] Pending
helpers_test.go:352: "sp-pod" [e5cace0b-2759-43ad-a23f-59e884aeaa35] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007097241s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-230202 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (28.49s)
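The PVC portion of this test repeatedly runs "kubectl get pvc myclaim -o=json" and retries until the claim leaves Pending and reports Bound (the "retry.go:31] will retry after ..." line above). A minimal Go sketch of that polling loop, shelling out to kubectl; waitForBound is a hypothetical helper, not the test's own code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForBound polls the PVC phase until it is "Bound" or the timeout expires.
func waitForBound(context, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s not Bound within %v", name, timeout)
}

func main() {
	if err := waitForBound("functional-230202", "myclaim", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pvc is Bound")
}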

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.38s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh -n functional-230202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 cp functional-230202:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3885307869/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh -n functional-230202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh -n functional-230202 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (36.97s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-230202 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-7d7b65bc95-6rb2c" [7f547b79-a299-472c-b5ba-7ff61f11efec] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-7d7b65bc95-6rb2c" [7f547b79-a299-472c-b5ba-7ff61f11efec] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 21.010926707s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-230202 exec mysql-7d7b65bc95-6rb2c -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-230202 exec mysql-7d7b65bc95-6rb2c -- mysql -ppassword -e "show databases;": exit status 1 (226.790365ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 02:20:14.979163  789441 retry.go:31] will retry after 1.492306683s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-230202 exec mysql-7d7b65bc95-6rb2c -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-230202 exec mysql-7d7b65bc95-6rb2c -- mysql -ppassword -e "show databases;": exit status 1 (174.796651ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 02:20:16.647384  789441 retry.go:31] will retry after 1.046941098s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-230202 exec mysql-7d7b65bc95-6rb2c -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-230202 exec mysql-7d7b65bc95-6rb2c -- mysql -ppassword -e "show databases;": exit status 1 (171.20403ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 02:20:17.866005  789441 retry.go:31] will retry after 2.662567941s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-230202 exec mysql-7d7b65bc95-6rb2c -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-230202 exec mysql-7d7b65bc95-6rb2c -- mysql -ppassword -e "show databases;": exit status 1 (186.119846ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 02:20:20.715584  789441 retry.go:31] will retry after 3.45273768s: exit status 1
E1209 02:20:21.230231  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:20:23.793204  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-230202 exec mysql-7d7b65bc95-6rb2c -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-230202 exec mysql-7d7b65bc95-6rb2c -- mysql -ppassword -e "show databases;": exit status 1 (135.947086ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 02:20:24.305097  789441 retry.go:31] will retry after 6.033522664s: exit status 1
E1209 02:20:28.915182  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-230202 exec mysql-7d7b65bc95-6rb2c -- mysql -ppassword -e "show databases;"
E1209 02:20:39.156591  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:20:59.638345  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:21:40.600590  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:22:55.207595  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:23:02.523558  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:24:18.271964  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (36.97s)
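The "retry.go:31] will retry after ..." lines above show the harness re-running the "kubectl exec ... mysql" probe with an increasing delay while the MySQL container finishes initializing. A minimal Go sketch of that retry-with-backoff pattern (an assumed illustration, not the harness's retry.go):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retry runs cmd up to attempts times, sleeping a jittered, roughly doubling
// interval between failures, and returns the last error if all attempts fail.
func retry(attempts int, initial time.Duration, cmd func() error) error {
	wait := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = cmd(); err == nil {
			return nil
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		wait *= 2
	}
	return err
}

func main() {
	err := retry(5, time.Second, func() error {
		// Placeholder for the "kubectl --context ... exec ... -- mysql -ppassword -e ..." probe above.
		return exec.Command("true").Run()
	})
	fmt.Println("final result:", err)
}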

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/789441/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "sudo cat /etc/test/nested/copy/789441/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/789441.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "sudo cat /etc/ssl/certs/789441.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/789441.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "sudo cat /usr/share/ca-certificates/789441.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/7894412.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "sudo cat /etc/ssl/certs/7894412.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/7894412.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "sudo cat /usr/share/ca-certificates/7894412.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-230202 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)
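The go-template passed to kubectl above iterates the first node's label map and prints each key. A standalone Go example of the same text/template range construct, using hypothetical label values rather than the cluster's real labels:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Hypothetical node labels standing in for (index .items 0).metadata.labels.
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-230202",
		"kubernetes.io/os":       "linux",
	}

	// Same range-over-map construct as the template passed to kubectl above.
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := .}}{{$k}} {{end}}`))

	if err := tmpl.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}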

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-230202 ssh "sudo systemctl is-active docker": exit status 1 (165.430766ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-230202 ssh "sudo systemctl is-active crio": exit status 1 (163.521258ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-230202 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-230202 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-xgbzb" [4bd9eced-83ba-4e5b-83cd-393a336d30b2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-5758569b79-xgbzb" [4bd9eced-83ba-4e5b-83cd-393a336d30b2] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003534446s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.81s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.81s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.81s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 service list -o json
functional_test.go:1504: Took "805.107888ms" to run "out/minikube-linux-amd64 -p functional-230202 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.81s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.60s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-230202 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-230202
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-230202
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-230202 image ls --format short --alsologtostderr:
I1209 02:20:11.730883  803851 out.go:360] Setting OutFile to fd 1 ...
I1209 02:20:11.730988  803851 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:20:11.730992  803851 out.go:374] Setting ErrFile to fd 2...
I1209 02:20:11.730997  803851 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:20:11.731192  803851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
I1209 02:20:11.731733  803851 config.go:182] Loaded profile config "functional-230202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 02:20:11.731826  803851 config.go:182] Loaded profile config "functional-230202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 02:20:11.734039  803851 ssh_runner.go:195] Run: systemctl --version
I1209 02:20:11.736626  803851 main.go:143] libmachine: domain functional-230202 has defined MAC address 52:54:00:44:54:51 in network mk-functional-230202
I1209 02:20:11.737116  803851 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:44:54:51", ip: ""} in network mk-functional-230202: {Iface:virbr1 ExpiryTime:2025-12-09 03:16:58 +0000 UTC Type:0 Mac:52:54:00:44:54:51 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:functional-230202 Clientid:01:52:54:00:44:54:51}
I1209 02:20:11.737170  803851 main.go:143] libmachine: domain functional-230202 has defined IP address 192.168.39.49 and MAC address 52:54:00:44:54:51 in network mk-functional-230202
I1209 02:20:11.737334  803851 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/functional-230202/id_rsa Username:docker}
I1209 02:20:11.839096  803851 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-230202 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ localhost/my-image                          │ functional-230202  │ sha256:f459e7 │ 775kB  │
│ public.ecr.aws/nginx/nginx                  │ alpine             │ sha256:d4918c │ 22.6MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0     │ sha256:45f3cc │ 23.1MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ public.ecr.aws/docker/library/mysql         │ 8.4                │ sha256:20d0be │ 233MB  │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0     │ sha256:aa9d02 │ 27.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0     │ sha256:8a4ded │ 25.8MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ docker.io/kicbase/echo-server               │ functional-230202  │ sha256:9056ab │ 2.37MB │
│ docker.io/library/minikube-local-cache-test │ functional-230202  │ sha256:222a70 │ 991B   │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:a3e246 │ 22.9MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0     │ sha256:7bb621 │ 17.2MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:aa5e3e │ 23.6MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-230202 image ls --format table --alsologtostderr:
I1209 02:20:15.861446  804002 out.go:360] Setting OutFile to fd 1 ...
I1209 02:20:15.861701  804002 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:20:15.861712  804002 out.go:374] Setting ErrFile to fd 2...
I1209 02:20:15.861719  804002 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:20:15.861947  804002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
I1209 02:20:15.862760  804002 config.go:182] Loaded profile config "functional-230202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 02:20:15.862917  804002 config.go:182] Loaded profile config "functional-230202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 02:20:15.865161  804002 ssh_runner.go:195] Run: systemctl --version
I1209 02:20:15.867249  804002 main.go:143] libmachine: domain functional-230202 has defined MAC address 52:54:00:44:54:51 in network mk-functional-230202
I1209 02:20:15.867627  804002 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:44:54:51", ip: ""} in network mk-functional-230202: {Iface:virbr1 ExpiryTime:2025-12-09 03:16:58 +0000 UTC Type:0 Mac:52:54:00:44:54:51 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:functional-230202 Clientid:01:52:54:00:44:54:51}
I1209 02:20:15.867663  804002 main.go:143] libmachine: domain functional-230202 has defined IP address 192.168.39.49 and MAC address 52:54:00:44:54:51 in network mk-functional-230202
I1209 02:20:15.867819  804002 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/functional-230202/id_rsa Username:docker}
I1209 02:20:15.949421  804002 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-230202 image ls --format json --alsologtostderr:
[{"id":"sha256:f459e7cd459180194fbdcbe0aeb702410c08433fa154607f0d7098df84572e89","repoDigests":[],"repoTags":["localhost/my-image:functional-230202"],"size":"774889"},{"id":"sha256:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"23121143"},{"id":"sha256:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"17228488"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pa
use@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"23553139"},{"id":"sha2
56:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"22871747"},{"id":"sha256:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"27671920"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-230202"],"size":"2372971"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:222a7069d2bbef084c697410e74e8d4
c827acc019506a05a93cb34ca948652ea","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-230202"],"size":"991"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"233030909"},{"id":"sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22621747"},{"id":"sha256:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e5
9605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"25786942"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-230202 image ls --format json --alsologtostderr:
I1209 02:20:15.645052  803991 out.go:360] Setting OutFile to fd 1 ...
I1209 02:20:15.645164  803991 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:20:15.645169  803991 out.go:374] Setting ErrFile to fd 2...
I1209 02:20:15.645173  803991 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:20:15.645353  803991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
I1209 02:20:15.645936  803991 config.go:182] Loaded profile config "functional-230202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 02:20:15.646033  803991 config.go:182] Loaded profile config "functional-230202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 02:20:15.648301  803991 ssh_runner.go:195] Run: systemctl --version
I1209 02:20:15.650735  803991 main.go:143] libmachine: domain functional-230202 has defined MAC address 52:54:00:44:54:51 in network mk-functional-230202
I1209 02:20:15.651213  803991 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:44:54:51", ip: ""} in network mk-functional-230202: {Iface:virbr1 ExpiryTime:2025-12-09 03:16:58 +0000 UTC Type:0 Mac:52:54:00:44:54:51 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:functional-230202 Clientid:01:52:54:00:44:54:51}
I1209 02:20:15.651250  803991 main.go:143] libmachine: domain functional-230202 has defined IP address 192.168.39.49 and MAC address 52:54:00:44:54:51 in network mk-functional-230202
I1209 02:20:15.651390  803991 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/functional-230202/id_rsa Username:docker}
I1209 02:20:15.751076  803991 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-230202 image ls --format yaml --alsologtostderr:
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22621747"
- id: sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "22871747"
- id: sha256:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "23121143"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:222a7069d2bbef084c697410e74e8d4c827acc019506a05a93cb34ca948652ea
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-230202
size: "991"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "233030909"
- id: sha256:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "27671920"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-230202
size: "2372971"
- id: sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "23553139"
- id: sha256:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "17228488"
- id: sha256:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "25786942"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-230202 image ls --format yaml --alsologtostderr:
I1209 02:20:11.964826  803862 out.go:360] Setting OutFile to fd 1 ...
I1209 02:20:11.964972  803862 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:20:11.964984  803862 out.go:374] Setting ErrFile to fd 2...
I1209 02:20:11.964991  803862 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:20:11.965289  803862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
I1209 02:20:11.966156  803862 config.go:182] Loaded profile config "functional-230202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 02:20:11.966316  803862 config.go:182] Loaded profile config "functional-230202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 02:20:11.969901  803862 ssh_runner.go:195] Run: systemctl --version
I1209 02:20:11.972858  803862 main.go:143] libmachine: domain functional-230202 has defined MAC address 52:54:00:44:54:51 in network mk-functional-230202
I1209 02:20:11.973491  803862 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:44:54:51", ip: ""} in network mk-functional-230202: {Iface:virbr1 ExpiryTime:2025-12-09 03:16:58 +0000 UTC Type:0 Mac:52:54:00:44:54:51 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:functional-230202 Clientid:01:52:54:00:44:54:51}
I1209 02:20:11.973533  803862 main.go:143] libmachine: domain functional-230202 has defined IP address 192.168.39.49 and MAC address 52:54:00:44:54:51 in network mk-functional-230202
I1209 02:20:11.973690  803862 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/functional-230202/id_rsa Username:docker}
I1209 02:20:12.056392  803862 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)
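Note: as the Stderr block above shows, every "image ls" format is produced from the same "sudo crictl images --output json" call on the node. A minimal manual equivalent is sketched below; the jq filter is illustrative and not part of the test.
# Sketch: list the containerd image tags directly, roughly what "image ls --format yaml" renders
minikube -p functional-230202 ssh "sudo crictl images --output json" \
  | jq -r '.images[].repoTags[]?'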

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-230202 ssh pgrep buildkitd: exit status 1 (159.715645ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 image build -t localhost/my-image:functional-230202 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-230202 image build -t localhost/my-image:functional-230202 testdata/build --alsologtostderr: (3.016208138s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-230202 image build -t localhost/my-image:functional-230202 testdata/build --alsologtostderr:
I1209 02:20:12.355171  803883 out.go:360] Setting OutFile to fd 1 ...
I1209 02:20:12.355376  803883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:20:12.355392  803883 out.go:374] Setting ErrFile to fd 2...
I1209 02:20:12.355399  803883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:20:12.355765  803883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
I1209 02:20:12.356699  803883 config.go:182] Loaded profile config "functional-230202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 02:20:12.357705  803883 config.go:182] Loaded profile config "functional-230202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 02:20:12.360836  803883 ssh_runner.go:195] Run: systemctl --version
I1209 02:20:12.365518  803883 main.go:143] libmachine: domain functional-230202 has defined MAC address 52:54:00:44:54:51 in network mk-functional-230202
I1209 02:20:12.366035  803883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:44:54:51", ip: ""} in network mk-functional-230202: {Iface:virbr1 ExpiryTime:2025-12-09 03:16:58 +0000 UTC Type:0 Mac:52:54:00:44:54:51 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:functional-230202 Clientid:01:52:54:00:44:54:51}
I1209 02:20:12.366064  803883 main.go:143] libmachine: domain functional-230202 has defined IP address 192.168.39.49 and MAC address 52:54:00:44:54:51 in network mk-functional-230202
I1209 02:20:12.366239  803883 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/functional-230202/id_rsa Username:docker}
I1209 02:20:12.465676  803883 build_images.go:162] Building image from path: /tmp/build.1995574254.tar
I1209 02:20:12.465748  803883 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 02:20:12.481545  803883 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1995574254.tar
I1209 02:20:12.488644  803883 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1995574254.tar: stat -c "%s %y" /var/lib/minikube/build/build.1995574254.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1995574254.tar': No such file or directory
I1209 02:20:12.488688  803883 ssh_runner.go:362] scp /tmp/build.1995574254.tar --> /var/lib/minikube/build/build.1995574254.tar (3072 bytes)
I1209 02:20:12.530206  803883 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1995574254
I1209 02:20:12.548202  803883 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1995574254 -xf /var/lib/minikube/build/build.1995574254.tar
I1209 02:20:12.564342  803883 containerd.go:394] Building image: /var/lib/minikube/build/build.1995574254
I1209 02:20:12.564422  803883 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1995574254 --local dockerfile=/var/lib/minikube/build/build.1995574254 --output type=image,name=localhost/my-image:functional-230202
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.2s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.5s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.3s done
#8 exporting manifest sha256:0c514c565c859ad35c8662d558b737b9a543298e0d6b3d9d09b295aaae8b1f58
#8 exporting manifest sha256:0c514c565c859ad35c8662d558b737b9a543298e0d6b3d9d09b295aaae8b1f58 0.0s done
#8 exporting config sha256:f459e7cd459180194fbdcbe0aeb702410c08433fa154607f0d7098df84572e89 0.0s done
#8 naming to localhost/my-image:functional-230202 done
#8 DONE 0.4s
I1209 02:20:15.241746  803883 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1995574254 --local dockerfile=/var/lib/minikube/build/build.1995574254 --output type=image,name=localhost/my-image:functional-230202: (2.67728514s)
I1209 02:20:15.241838  803883 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1995574254
I1209 02:20:15.264537  803883 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1995574254.tar
I1209 02:20:15.292978  803883 build_images.go:218] Built localhost/my-image:functional-230202 from /tmp/build.1995574254.tar
I1209 02:20:15.293018  803883 build_images.go:134] succeeded building to: functional-230202
I1209 02:20:15.293025  803883 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.45s)
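Note: the Dockerfile under testdata/build is not printed in the log; judging from build steps #5 to #7 above (FROM busybox, RUN true, ADD content.txt), the context is approximately the following. This is a reconstruction for reference, not the checked-in file, and the paths and file contents are illustrative.
# Recreate an equivalent build context and run the same image build (sketch):
mkdir -p /tmp/minimal-build && cd /tmp/minimal-build
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
echo "placeholder content" > content.txt
# minikube tars the context, copies it to /var/lib/minikube/build/ on the node, runs buildctl
# with the dockerfile.v0 frontend, and names the result localhost/my-image:functional-230202.
minikube -p functional-230202 image build -t localhost/my-image:functional-230202 . --alsologtostderr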

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-230202
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.41s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.49:32510
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 image load --daemon kicbase/echo-server:functional-230202 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-230202 image load --daemon kicbase/echo-server:functional-230202 --alsologtostderr: (1.231277616s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.44s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.49:32510
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.37s)
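Note: the endpoint above is the VM IP plus the NodePort of the hello-node service. A sketch of the same lookup done directly; the jsonpath query is an assumption for illustration, not something the test runs.
minikube -p functional-230202 service hello-node --url          # prints http://192.168.39.49:32510
kubectl --context functional-230202 get svc hello-node \
  -o jsonpath='{.spec.ports[0].nodePort}'                        # just the NodePort, e.g. 32510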

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 image load --daemon kicbase/echo-server:functional-230202 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-230202 image load --daemon kicbase/echo-server:functional-230202 --alsologtostderr: (1.105066311s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.29s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.08s)
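Note: all three UpdateContextCmd cases run the same command; update-context rewrites the profile's kubeconfig entry so the API server address matches the VM's current IP. A sketch for checking the result by hand; the jsonpath query is illustrative, not part of the test.
minikube -p functional-230202 update-context
kubectl config view \
  -o jsonpath='{.clusters[?(@.name=="functional-230202")].cluster.server}'   # should print the VM's https API server URL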

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-230202
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 image load --daemon kicbase/echo-server:functional-230202 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.60s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 image save kicbase/echo-server:functional-230202 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.61s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.68s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 image rm kicbase/echo-server:functional-230202 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.68s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (1.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-230202 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr: (1.283038439s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (1.51s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-230202
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 image save --daemon kicbase/echo-server:functional-230202 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-230202
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.43s)
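Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together exercise a full save/load round trip. Condensed sketch; the /tmp path is illustrative, the test uses the workspace path shown above.
minikube -p functional-230202 image save kicbase/echo-server:functional-230202 /tmp/echo-server-save.tar
minikube -p functional-230202 image rm kicbase/echo-server:functional-230202
minikube -p functional-230202 image load /tmp/echo-server-save.tar                        # back into containerd from the tarball
minikube -p functional-230202 image save --daemon kicbase/echo-server:functional-230202   # export into the host docker daemon
docker image inspect kicbase/echo-server:functional-230202                                # verify it arrived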

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (16.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-230202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4085335677/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765246798886127631" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4085335677/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765246798886127631" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4085335677/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765246798886127631" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4085335677/001/test-1765246798886127631
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-230202 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (203.145627ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 02:19:59.089686  789441 retry.go:31] will retry after 603.634747ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 02:19 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 02:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 02:19 test-1765246798886127631
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh cat /mount-9p/test-1765246798886127631
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-230202 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [b154ad62-fed9-49c4-8664-e7568230b285] Pending
helpers_test.go:352: "busybox-mount" [b154ad62-fed9-49c4-8664-e7568230b285] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [b154ad62-fed9-49c4-8664-e7568230b285] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [b154ad62-fed9-49c4-8664-e7568230b285] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 14.011919475s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-230202 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-230202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4085335677/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (16.41s)
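Note: condensed sketch of the any-port flow above; the host path is illustrative. The mount runs in the background and is verified over SSH before the busybox pod reads and writes through it.
minikube mount -p functional-230202 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &   # background 9p mount, random port
MOUNT_PID=$!
minikube -p functional-230202 ssh "findmnt -T /mount-9p | grep 9p"                      # confirm the 9p filesystem is mounted
minikube -p functional-230202 ssh -- ls -la /mount-9p
kubectl --context functional-230202 replace --force -f testdata/busybox-mount-test.yaml
minikube -p functional-230202 ssh "sudo umount -f /mount-9p"                            # cleanup
kill $MOUNT_PID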

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "271.367622ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "69.451827ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "275.608586ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "73.837848ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.35s)
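Note: the --light variants return in roughly 70ms versus roughly 275ms for the full listing, presumably because they skip probing each cluster's status. A sketch for consuming the JSON output; the jq filter and the valid/invalid layout of the output are assumptions, not asserted by the test.
minikube profile list -o json --light | jq -r '.valid[].Name'   # just the profile names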

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-230202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4119700011/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-230202 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (227.956428ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 02:20:15.520982  789441 retry.go:31] will retry after 666.515607ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-230202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4119700011/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-230202 ssh "sudo umount -f /mount-9p": exit status 1 (178.28334ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-230202 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-230202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4119700011/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.61s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-230202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4092420648/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-230202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4092420648/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-230202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4092420648/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-230202 ssh "findmnt -T" /mount1: exit status 1 (180.714892ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 02:20:17.082503  789441 retry.go:31] will retry after 348.087002ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-230202 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-230202 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-230202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4092420648/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-230202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4092420648/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-230202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4092420648/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
E1209 02:20:18.660864  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:20:18.667262  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:20:18.678655  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:20:18.700030  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:20:18.741419  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:20:18.822858  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:20:18.984407  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:20:19.306154  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:20:19.948219  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.10s)
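Note: VerifyCleanup starts three simultaneous mounts of the same host directory and then relies on a single kill call to tear them all down. Sketch; the host path is illustrative, the test uses its own temp directory.
minikube mount -p functional-230202 /tmp/src:/mount1 --alsologtostderr -v=1 &
minikube mount -p functional-230202 /tmp/src:/mount2 --alsologtostderr -v=1 &
minikube mount -p functional-230202 /tmp/src:/mount3 --alsologtostderr -v=1 &
minikube mount -p functional-230202 --kill=true   # terminates every mount process for the profile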

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-230202
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-230202
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-230202
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (207.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd
E1209 02:25:18.663031  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:25:46.365533  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:27:55.205634  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-825279 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd: (3m26.594665407s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (207.17s)
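Note: the --ha flag provisions three control-plane nodes before the status check. A sketch of a quick manual sanity check after such a start; these commands are assumptions for illustration, not part of the test.
kubectl --context ha-825279 get nodes -o wide                                     # expect ha-825279, -m02, -m03 as control planes
kubectl --context ha-825279 get pods -n kube-system -l component=kube-apiserver   # one apiserver per control-plane node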

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (4.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-825279 kubectl -- rollout status deployment/busybox: (2.356963522s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- exec busybox-7b57f96db7-k6bv6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- exec busybox-7b57f96db7-kfhbh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- exec busybox-7b57f96db7-mrn76 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- exec busybox-7b57f96db7-k6bv6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- exec busybox-7b57f96db7-kfhbh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- exec busybox-7b57f96db7-mrn76 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- exec busybox-7b57f96db7-k6bv6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- exec busybox-7b57f96db7-kfhbh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- exec busybox-7b57f96db7-mrn76 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.75s)
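Note: the DNS checks above run the same nslookup against every busybox replica. Condensed into a loop, equivalent to the per-pod exec calls in the log (sketch):
for pod in $(kubectl --context ha-825279 get pods -o jsonpath='{.items[*].metadata.name}'); do
  kubectl --context ha-825279 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
done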

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- exec busybox-7b57f96db7-k6bv6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- exec busybox-7b57f96db7-k6bv6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- exec busybox-7b57f96db7-kfhbh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- exec busybox-7b57f96db7-kfhbh -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- exec busybox-7b57f96db7-mrn76 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 kubectl -- exec busybox-7b57f96db7-mrn76 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.40s)
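Note: the awk/cut pipeline extracts the host gateway IP (192.168.39.1 here) from busybox's nslookup output for host.minikube.internal, and each pod then pings that address once. Sketch of the same check against a single pod:
HOST_IP=$(kubectl --context ha-825279 exec busybox-7b57f96db7-k6bv6 -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
kubectl --context ha-825279 exec busybox-7b57f96db7-k6bv6 -- sh -c "ping -c 1 $HOST_IP"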

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (44.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-825279 node add --alsologtostderr -v 5: (44.151576387s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.85s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-825279 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.70s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (11.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp testdata/cp-test.txt ha-825279:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp ha-825279:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile949468697/001/cp-test_ha-825279.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp ha-825279:/home/docker/cp-test.txt ha-825279-m02:/home/docker/cp-test_ha-825279_ha-825279-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m02 "sudo cat /home/docker/cp-test_ha-825279_ha-825279-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp ha-825279:/home/docker/cp-test.txt ha-825279-m03:/home/docker/cp-test_ha-825279_ha-825279-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m03 "sudo cat /home/docker/cp-test_ha-825279_ha-825279-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp ha-825279:/home/docker/cp-test.txt ha-825279-m04:/home/docker/cp-test_ha-825279_ha-825279-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m04 "sudo cat /home/docker/cp-test_ha-825279_ha-825279-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp testdata/cp-test.txt ha-825279-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp ha-825279-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile949468697/001/cp-test_ha-825279-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp ha-825279-m02:/home/docker/cp-test.txt ha-825279:/home/docker/cp-test_ha-825279-m02_ha-825279.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279 "sudo cat /home/docker/cp-test_ha-825279-m02_ha-825279.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp ha-825279-m02:/home/docker/cp-test.txt ha-825279-m03:/home/docker/cp-test_ha-825279-m02_ha-825279-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m03 "sudo cat /home/docker/cp-test_ha-825279-m02_ha-825279-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp ha-825279-m02:/home/docker/cp-test.txt ha-825279-m04:/home/docker/cp-test_ha-825279-m02_ha-825279-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m04 "sudo cat /home/docker/cp-test_ha-825279-m02_ha-825279-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp testdata/cp-test.txt ha-825279-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp ha-825279-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile949468697/001/cp-test_ha-825279-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp ha-825279-m03:/home/docker/cp-test.txt ha-825279:/home/docker/cp-test_ha-825279-m03_ha-825279.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279 "sudo cat /home/docker/cp-test_ha-825279-m03_ha-825279.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp ha-825279-m03:/home/docker/cp-test.txt ha-825279-m02:/home/docker/cp-test_ha-825279-m03_ha-825279-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m02 "sudo cat /home/docker/cp-test_ha-825279-m03_ha-825279-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp ha-825279-m03:/home/docker/cp-test.txt ha-825279-m04:/home/docker/cp-test_ha-825279-m03_ha-825279-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m04 "sudo cat /home/docker/cp-test_ha-825279-m03_ha-825279-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp testdata/cp-test.txt ha-825279-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp ha-825279-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile949468697/001/cp-test_ha-825279-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp ha-825279-m04:/home/docker/cp-test.txt ha-825279:/home/docker/cp-test_ha-825279-m04_ha-825279.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279 "sudo cat /home/docker/cp-test_ha-825279-m04_ha-825279.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp ha-825279-m04:/home/docker/cp-test.txt ha-825279-m02:/home/docker/cp-test_ha-825279-m04_ha-825279-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m02 "sudo cat /home/docker/cp-test_ha-825279-m04_ha-825279-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 cp ha-825279-m04:/home/docker/cp-test.txt ha-825279-m03:/home/docker/cp-test_ha-825279-m04_ha-825279-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 ssh -n ha-825279-m03 "sudo cat /home/docker/cp-test_ha-825279-m04_ha-825279-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.14s)
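The copy/verify loop logged above boils down to: stage testdata/cp-test.txt on one node with minikube cp, fan it out to the other nodes, and read each copy back over minikube ssh. A minimal Go sketch of that round-trip, assuming a hypothetical multi-node profile (the profile and node names below are placeholders, not taken from this run):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run invokes the same minikube binary this report exercises and fails fast on error.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	profile := "ha-example"                           // placeholder profile name
	nodes := []string{"ha-example", "ha-example-m02"} // placeholder node names

	// Stage the fixture on the first node, then copy it node-to-node and
	// read it back over SSH, mirroring helpers_test.go:573 / :551 above.
	run("-p", profile, "cp", "testdata/cp-test.txt", nodes[0]+":/home/docker/cp-test.txt")
	want := run("-p", profile, "ssh", "-n", nodes[0], "sudo cat /home/docker/cp-test.txt")

	for _, n := range nodes[1:] {
		dst := fmt.Sprintf("%s:/home/docker/cp-test_%s.txt", n, n)
		run("-p", profile, "cp", nodes[0]+":/home/docker/cp-test.txt", dst)
		got := run("-p", profile, "ssh", "-n", n, fmt.Sprintf("sudo cat /home/docker/cp-test_%s.txt", n))
		if got != want {
			log.Fatalf("content mismatch on %s", n)
		}
	}
	fmt.Println("copy/verify round-trip OK")
}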

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (88.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 node stop m02 --alsologtostderr -v 5
E1209 02:29:40.481972  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:29:40.488463  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:29:40.499886  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:29:40.521350  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:29:40.562952  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:29:40.644467  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:29:40.806084  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:29:41.127807  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:29:41.769717  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:29:43.051422  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:29:45.612924  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:29:50.735053  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:30:00.977045  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:30:18.660928  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:30:21.459002  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-825279 node stop m02 --alsologtostderr -v 5: (1m27.642181024s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-825279 status --alsologtostderr -v 5: exit status 7 (529.828039ms)

                                                
                                                
-- stdout --
	ha-825279
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-825279-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-825279-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-825279-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 02:30:42.605517  808047 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:30:42.605789  808047 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:30:42.605798  808047 out.go:374] Setting ErrFile to fd 2...
	I1209 02:30:42.605802  808047 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:30:42.606006  808047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
	I1209 02:30:42.606189  808047 out.go:368] Setting JSON to false
	I1209 02:30:42.606218  808047 mustload.go:66] Loading cluster: ha-825279
	I1209 02:30:42.606362  808047 notify.go:221] Checking for updates...
	I1209 02:30:42.606654  808047 config.go:182] Loaded profile config "ha-825279": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 02:30:42.606677  808047 status.go:174] checking status of ha-825279 ...
	I1209 02:30:42.609286  808047 status.go:371] ha-825279 host status = "Running" (err=<nil>)
	I1209 02:30:42.609313  808047 host.go:66] Checking if "ha-825279" exists ...
	I1209 02:30:42.612229  808047 main.go:143] libmachine: domain ha-825279 has defined MAC address 52:54:00:d6:d5:23 in network mk-ha-825279
	I1209 02:30:42.612824  808047 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d6:d5:23", ip: ""} in network mk-ha-825279: {Iface:virbr1 ExpiryTime:2025-12-09 03:25:00 +0000 UTC Type:0 Mac:52:54:00:d6:d5:23 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-825279 Clientid:01:52:54:00:d6:d5:23}
	I1209 02:30:42.612876  808047 main.go:143] libmachine: domain ha-825279 has defined IP address 192.168.39.240 and MAC address 52:54:00:d6:d5:23 in network mk-ha-825279
	I1209 02:30:42.613106  808047 host.go:66] Checking if "ha-825279" exists ...
	I1209 02:30:42.613401  808047 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:30:42.616281  808047 main.go:143] libmachine: domain ha-825279 has defined MAC address 52:54:00:d6:d5:23 in network mk-ha-825279
	I1209 02:30:42.616755  808047 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d6:d5:23", ip: ""} in network mk-ha-825279: {Iface:virbr1 ExpiryTime:2025-12-09 03:25:00 +0000 UTC Type:0 Mac:52:54:00:d6:d5:23 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-825279 Clientid:01:52:54:00:d6:d5:23}
	I1209 02:30:42.616776  808047 main.go:143] libmachine: domain ha-825279 has defined IP address 192.168.39.240 and MAC address 52:54:00:d6:d5:23 in network mk-ha-825279
	I1209 02:30:42.616944  808047 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/ha-825279/id_rsa Username:docker}
	I1209 02:30:42.708992  808047 ssh_runner.go:195] Run: systemctl --version
	I1209 02:30:42.717244  808047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:30:42.737949  808047 kubeconfig.go:125] found "ha-825279" server: "https://192.168.39.254:8443"
	I1209 02:30:42.737994  808047 api_server.go:166] Checking apiserver status ...
	I1209 02:30:42.738050  808047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:30:42.758763  808047 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1443/cgroup
	W1209 02:30:42.772702  808047 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1443/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1209 02:30:42.772768  808047 ssh_runner.go:195] Run: ls
	I1209 02:30:42.778368  808047 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1209 02:30:42.785917  808047 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1209 02:30:42.785952  808047 status.go:463] ha-825279 apiserver status = Running (err=<nil>)
	I1209 02:30:42.785979  808047 status.go:176] ha-825279 status: &{Name:ha-825279 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:30:42.785999  808047 status.go:174] checking status of ha-825279-m02 ...
	I1209 02:30:42.787780  808047 status.go:371] ha-825279-m02 host status = "Stopped" (err=<nil>)
	I1209 02:30:42.787804  808047 status.go:384] host is not running, skipping remaining checks
	I1209 02:30:42.787813  808047 status.go:176] ha-825279-m02 status: &{Name:ha-825279-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:30:42.787848  808047 status.go:174] checking status of ha-825279-m03 ...
	I1209 02:30:42.789279  808047 status.go:371] ha-825279-m03 host status = "Running" (err=<nil>)
	I1209 02:30:42.789304  808047 host.go:66] Checking if "ha-825279-m03" exists ...
	I1209 02:30:42.792344  808047 main.go:143] libmachine: domain ha-825279-m03 has defined MAC address 52:54:00:f4:77:20 in network mk-ha-825279
	I1209 02:30:42.792842  808047 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:77:20", ip: ""} in network mk-ha-825279: {Iface:virbr1 ExpiryTime:2025-12-09 03:27:13 +0000 UTC Type:0 Mac:52:54:00:f4:77:20 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-825279-m03 Clientid:01:52:54:00:f4:77:20}
	I1209 02:30:42.792886  808047 main.go:143] libmachine: domain ha-825279-m03 has defined IP address 192.168.39.245 and MAC address 52:54:00:f4:77:20 in network mk-ha-825279
	I1209 02:30:42.793039  808047 host.go:66] Checking if "ha-825279-m03" exists ...
	I1209 02:30:42.793314  808047 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:30:42.795615  808047 main.go:143] libmachine: domain ha-825279-m03 has defined MAC address 52:54:00:f4:77:20 in network mk-ha-825279
	I1209 02:30:42.796052  808047 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:77:20", ip: ""} in network mk-ha-825279: {Iface:virbr1 ExpiryTime:2025-12-09 03:27:13 +0000 UTC Type:0 Mac:52:54:00:f4:77:20 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-825279-m03 Clientid:01:52:54:00:f4:77:20}
	I1209 02:30:42.796094  808047 main.go:143] libmachine: domain ha-825279-m03 has defined IP address 192.168.39.245 and MAC address 52:54:00:f4:77:20 in network mk-ha-825279
	I1209 02:30:42.796270  808047 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/ha-825279-m03/id_rsa Username:docker}
	I1209 02:30:42.886235  808047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:30:42.907168  808047 kubeconfig.go:125] found "ha-825279" server: "https://192.168.39.254:8443"
	I1209 02:30:42.907204  808047 api_server.go:166] Checking apiserver status ...
	I1209 02:30:42.907252  808047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:30:42.930166  808047 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup
	W1209 02:30:42.943792  808047 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1209 02:30:42.943883  808047 ssh_runner.go:195] Run: ls
	I1209 02:30:42.950203  808047 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1209 02:30:42.955695  808047 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1209 02:30:42.955722  808047 status.go:463] ha-825279-m03 apiserver status = Running (err=<nil>)
	I1209 02:30:42.955734  808047 status.go:176] ha-825279-m03 status: &{Name:ha-825279-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:30:42.955756  808047 status.go:174] checking status of ha-825279-m04 ...
	I1209 02:30:42.957430  808047 status.go:371] ha-825279-m04 host status = "Running" (err=<nil>)
	I1209 02:30:42.957451  808047 host.go:66] Checking if "ha-825279-m04" exists ...
	I1209 02:30:42.959751  808047 main.go:143] libmachine: domain ha-825279-m04 has defined MAC address 52:54:00:60:02:5b in network mk-ha-825279
	I1209 02:30:42.960178  808047 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:02:5b", ip: ""} in network mk-ha-825279: {Iface:virbr1 ExpiryTime:2025-12-09 03:28:34 +0000 UTC Type:0 Mac:52:54:00:60:02:5b Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-825279-m04 Clientid:01:52:54:00:60:02:5b}
	I1209 02:30:42.960209  808047 main.go:143] libmachine: domain ha-825279-m04 has defined IP address 192.168.39.133 and MAC address 52:54:00:60:02:5b in network mk-ha-825279
	I1209 02:30:42.960367  808047 host.go:66] Checking if "ha-825279-m04" exists ...
	I1209 02:30:42.960603  808047 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:30:42.963270  808047 main.go:143] libmachine: domain ha-825279-m04 has defined MAC address 52:54:00:60:02:5b in network mk-ha-825279
	I1209 02:30:42.963802  808047 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:02:5b", ip: ""} in network mk-ha-825279: {Iface:virbr1 ExpiryTime:2025-12-09 03:28:34 +0000 UTC Type:0 Mac:52:54:00:60:02:5b Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-825279-m04 Clientid:01:52:54:00:60:02:5b}
	I1209 02:30:42.963846  808047 main.go:143] libmachine: domain ha-825279-m04 has defined IP address 192.168.39.133 and MAC address 52:54:00:60:02:5b in network mk-ha-825279
	I1209 02:30:42.964023  808047 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/ha-825279-m04/id_rsa Username:docker}
	I1209 02:30:43.050001  808047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:30:43.071234  808047 status.go:176] ha-825279-m04 status: &{Name:ha-825279-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (88.17s)
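Note the non-zero exit above: with one control-plane host stopped, minikube status still prints the per-node report on stdout but exits with status 7, so a caller has to tolerate that exit code instead of treating it as a hard failure. A small Go sketch of that handling, with a placeholder profile name:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-example", "status") // placeholder profile
	out, err := cmd.Output()                                                      // stdout only; debug logging goes to stderr

	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		log.Fatal(err) // could not run the binary at all
	}
	// A non-zero exit is expected while any node is down; the per-node
	// report on stdout is still complete, so parse it rather than aborting.
	stopped := strings.Count(string(out), "host: Stopped")
	fmt.Printf("exit=%d, stopped hosts: %d\n", cmd.ProcessState.ExitCode(), stopped)
}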

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (32.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 node start m02 --alsologtostderr -v 5
E1209 02:31:02.422307  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-825279 node start m02 --alsologtostderr -v 5: (31.123159605s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.08s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (383.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 stop --alsologtostderr -v 5
E1209 02:32:24.343993  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:32:55.203872  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:34:40.482079  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:35:08.188169  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:35:18.660358  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-825279 stop --alsologtostderr -v 5: (4m17.58895156s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 start --wait true --alsologtostderr -v 5
E1209 02:36:41.727867  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-825279 start --wait true --alsologtostderr -v 5: (2m6.074181797s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (383.82s)
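The assertion behind this test is a before/after comparison: record the node list, stop and restart the whole cluster with --wait, then require the node list to come back unchanged. A simplified Go sketch of that flow (placeholder profile name; the actual ha_test.go check may parse the output rather than comparing it verbatim):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func minikube(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	profile := "ha-example" // placeholder profile name
	before := minikube("-p", profile, "node", "list")

	minikube("-p", profile, "stop")
	minikube("-p", profile, "start", "--wait", "true")

	after := minikube("-p", profile, "node", "list")
	if before != after {
		log.Fatalf("node list changed across restart:\nbefore:\n%s\nafter:\n%s", before, after)
	}
	fmt.Println("node list preserved across restart")
}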

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (6.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-825279 node delete m03 --alsologtostderr -v 5: (6.096624237s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.72s)
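The go-template at ha_test.go:521 prints one Ready condition status per node, which reduces the readiness check to a line scan. A Go sketch running the same template against the current kubectl context (the shell quotes around the template are dropped here because no shell is involved):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		log.Fatalf("kubectl get nodes: %v", err)
	}
	// One line per node; anything other than "True" means a node is not Ready.
	for _, line := range strings.Split(string(out), "\n") {
		if s := strings.TrimSpace(line); s != "" && s != "True" {
			log.Fatalf("node not Ready: %q", s)
		}
	}
	fmt.Println("all nodes Ready")
}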

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (245.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 stop --alsologtostderr -v 5
E1209 02:37:55.203685  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:39:40.481521  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:40:18.662607  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:40:58.274981  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-825279 stop --alsologtostderr -v 5: (4m5.625565208s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-825279 status --alsologtostderr -v 5: exit status 7 (67.648125ms)

                                                
                                                
-- stdout --
	ha-825279
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-825279-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-825279-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 02:41:53.381793  811089 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:41:53.381907  811089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:41:53.381914  811089 out.go:374] Setting ErrFile to fd 2...
	I1209 02:41:53.381918  811089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:41:53.382181  811089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
	I1209 02:41:53.382356  811089 out.go:368] Setting JSON to false
	I1209 02:41:53.382392  811089 mustload.go:66] Loading cluster: ha-825279
	I1209 02:41:53.382558  811089 notify.go:221] Checking for updates...
	I1209 02:41:53.382928  811089 config.go:182] Loaded profile config "ha-825279": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 02:41:53.382952  811089 status.go:174] checking status of ha-825279 ...
	I1209 02:41:53.385280  811089 status.go:371] ha-825279 host status = "Stopped" (err=<nil>)
	I1209 02:41:53.385296  811089 status.go:384] host is not running, skipping remaining checks
	I1209 02:41:53.385300  811089 status.go:176] ha-825279 status: &{Name:ha-825279 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:41:53.385317  811089 status.go:174] checking status of ha-825279-m02 ...
	I1209 02:41:53.386685  811089 status.go:371] ha-825279-m02 host status = "Stopped" (err=<nil>)
	I1209 02:41:53.386700  811089 status.go:384] host is not running, skipping remaining checks
	I1209 02:41:53.386705  811089 status.go:176] ha-825279-m02 status: &{Name:ha-825279-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:41:53.386718  811089 status.go:174] checking status of ha-825279-m04 ...
	I1209 02:41:53.388205  811089 status.go:371] ha-825279-m04 host status = "Stopped" (err=<nil>)
	I1209 02:41:53.388221  811089 status.go:384] host is not running, skipping remaining checks
	I1209 02:41:53.388226  811089 status.go:176] ha-825279-m04 status: &{Name:ha-825279-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (245.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (113.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd
E1209 02:42:55.203799  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-825279 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd: (1m52.363194368s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (113.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (75.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 node add --control-plane --alsologtostderr -v 5
E1209 02:44:40.481408  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-825279 node add --control-plane --alsologtostderr -v 5: (1m15.166712199s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-825279 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.72s)

                                                
                                    
x
+
TestJSONOutput/start/Command (78.78s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-404322 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd
E1209 02:45:18.661747  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:46:03.551881  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-404322 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd: (1m18.777830367s)
--- PASS: TestJSONOutput/start/Command (78.78s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-404322 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-404322 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.74s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-404322 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-404322 --output=json --user=testUser: (6.736799943s)
--- PASS: TestJSONOutput/stop/Command (6.74s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-629731 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-629731 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (77.842396ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"11e8419d-300a-4a09-9f7a-3d04f9eee722","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-629731] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f6a091b3-fbd4-4754-a2b7-c5f7e1fdd59b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22081"}}
	{"specversion":"1.0","id":"0dbc0b7e-46e0-4073-9c30-0d2204f440b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eb8f5060-e046-4753-9a6f-c781996a3548","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig"}}
	{"specversion":"1.0","id":"0901057c-3795-4724-b1b3-98b1f00fca3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube"}}
	{"specversion":"1.0","id":"86431d89-8154-4e3c-b73e-d9e2294fe4b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"79fcb52c-b8ce-4565-9640-02fe68e08d71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"03f90606-ec41-4ca9-8a01-33b6784f74e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-629731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-629731
--- PASS: TestErrorJSONOutput (0.23s)
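The stdout block above shows the shape of minikube's --output=json stream: one CloudEvents-style JSON object per line, with a type of io.k8s.sigs.minikube.step, .info, or .error and a string-valued data map. A Go sketch that consumes such a stream from stdin (field names taken from the events above; how the stream is piped in is up to the caller):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// event mirrors the fields visible in the log above; other fields are ignored.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g. out/minikube-linux-amd64 start -p demo --output=json | go run events.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON noise on the stream
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error":
			log.Fatalf("error event: %s (exitcode %s)", ev.Data["message"], ev.Data["exitcode"])
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}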

                                                
                                    
x
+
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
x
+
TestMinikubeProfile (82.3s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-540314 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-540314 --driver=kvm2  --container-runtime=containerd: (38.165323207s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-542548 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-542548 --driver=kvm2  --container-runtime=containerd: (41.467844179s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-540314
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-542548
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-542548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-542548
E1209 02:47:55.204415  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:175: Cleaning up "first-540314" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-540314
--- PASS: TestMinikubeProfile (82.30s)
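Both this test and the Degraded/HAppy checks above lean on profile list --output json. A Go sketch of reading that output; the valid/invalid arrays and the Name/Status field names below are an assumption about the JSON shape, not something confirmed by this report:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileList is an assumed shape for `minikube profile list -o json`.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
	Invalid []struct {
		Name string `json:"Name"`
	} `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatalf("profile list: %v", err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatalf("decode profile list: %v", err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("profile %s: %s\n", p.Name, p.Status)
	}
	for _, p := range pl.Invalid {
		fmt.Printf("profile %s: invalid\n", p.Name)
	}
}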

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (24.29s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-218548 --memory=3072 --mount-string /tmp/TestMountStartserial882029266/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-218548 --memory=3072 --mount-string /tmp/TestMountStartserial882029266/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (23.288106284s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.29s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-218548 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-218548 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)
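The mount verification above is two SSH commands: list the mounted directory, then ask findmnt for a JSON description of the mountpoint. A Go sketch of the findmnt half, assuming a placeholder profile name and the usual findmnt --json layout (a filesystems array with target/source/fstype):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// findmntOut matches the standard `findmnt --json` structure.
type findmntOut struct {
	Filesystems []struct {
		Target string `json:"target"`
		Source string `json:"source"`
		Fstype string `json:"fstype"`
	} `json:"filesystems"`
}

func main() {
	profile := "mount-start-demo" // placeholder profile name
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "--", "findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		log.Fatalf("findmnt over ssh: %v", err)
	}
	var fm findmntOut
	if err := json.Unmarshal(out, &fm); err != nil {
		log.Fatalf("decode findmnt output: %v", err)
	}
	for _, fs := range fm.Filesystems {
		fmt.Printf("%s is mounted from %s (%s)\n", fs.Target, fs.Source, fs.Fstype)
	}
}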

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (24.61s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-239698 --memory=3072 --mount-string /tmp/TestMountStartserial882029266/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-239698 --memory=3072 --mount-string /tmp/TestMountStartserial882029266/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (23.61301866s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.61s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-239698 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-239698 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-218548 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-239698 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-239698 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.36s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-239698
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-239698: (1.355068s)
--- PASS: TestMountStart/serial/Stop (1.36s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (21.41s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-239698
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-239698: (20.405749068s)
--- PASS: TestMountStart/serial/RestartStopped (21.41s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.33s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-239698 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-239698 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.33s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (102.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-976295 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1209 02:49:40.480530  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:50:18.660573  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-976295 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m41.815502865s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (102.16s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (3.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976295 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976295 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-976295 -- rollout status deployment/busybox: (2.065589935s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976295 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976295 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976295 -- exec busybox-7b57f96db7-bx64d -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976295 -- exec busybox-7b57f96db7-tprs6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976295 -- exec busybox-7b57f96db7-bx64d -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976295 -- exec busybox-7b57f96db7-tprs6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976295 -- exec busybox-7b57f96db7-bx64d -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976295 -- exec busybox-7b57f96db7-tprs6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.73s)
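
The deployment subtest above applies a two-replica busybox manifest and then checks DNS resolution from each pod across both nodes. A minimal sketch of the same checks (the pod name below is the one generated in this run and will differ on another cluster):

	$ kubectl --context multinode-976295 apply -f testdata/multinodes/multinode-pod-dns-test.yaml
	$ kubectl --context multinode-976295 rollout status deployment/busybox
	$ kubectl --context multinode-976295 exec busybox-7b57f96db7-bx64d -- nslookup kubernetes.default.svc.cluster.local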

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976295 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976295 -- exec busybox-7b57f96db7-bx64d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976295 -- exec busybox-7b57f96db7-bx64d -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976295 -- exec busybox-7b57f96db7-tprs6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976295 -- exec busybox-7b57f96db7-tprs6 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)

                                                
                                    
TestMultiNode/serial/AddNode (43.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-976295 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-976295 -v=5 --alsologtostderr: (42.835034758s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.29s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-976295 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 cp testdata/cp-test.txt multinode-976295:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 ssh -n multinode-976295 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 cp multinode-976295:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4285182678/001/cp-test_multinode-976295.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 ssh -n multinode-976295 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 cp multinode-976295:/home/docker/cp-test.txt multinode-976295-m02:/home/docker/cp-test_multinode-976295_multinode-976295-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 ssh -n multinode-976295 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 ssh -n multinode-976295-m02 "sudo cat /home/docker/cp-test_multinode-976295_multinode-976295-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 cp multinode-976295:/home/docker/cp-test.txt multinode-976295-m03:/home/docker/cp-test_multinode-976295_multinode-976295-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 ssh -n multinode-976295 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 ssh -n multinode-976295-m03 "sudo cat /home/docker/cp-test_multinode-976295_multinode-976295-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 cp testdata/cp-test.txt multinode-976295-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 ssh -n multinode-976295-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 cp multinode-976295-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4285182678/001/cp-test_multinode-976295-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 ssh -n multinode-976295-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 cp multinode-976295-m02:/home/docker/cp-test.txt multinode-976295:/home/docker/cp-test_multinode-976295-m02_multinode-976295.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 ssh -n multinode-976295-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 ssh -n multinode-976295 "sudo cat /home/docker/cp-test_multinode-976295-m02_multinode-976295.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 cp multinode-976295-m02:/home/docker/cp-test.txt multinode-976295-m03:/home/docker/cp-test_multinode-976295-m02_multinode-976295-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 ssh -n multinode-976295-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 ssh -n multinode-976295-m03 "sudo cat /home/docker/cp-test_multinode-976295-m02_multinode-976295-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 cp testdata/cp-test.txt multinode-976295-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 ssh -n multinode-976295-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 cp multinode-976295-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4285182678/001/cp-test_multinode-976295-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 ssh -n multinode-976295-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 cp multinode-976295-m03:/home/docker/cp-test.txt multinode-976295:/home/docker/cp-test_multinode-976295-m03_multinode-976295.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 ssh -n multinode-976295-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 ssh -n multinode-976295 "sudo cat /home/docker/cp-test_multinode-976295-m03_multinode-976295.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 cp multinode-976295-m03:/home/docker/cp-test.txt multinode-976295-m02:/home/docker/cp-test_multinode-976295-m03_multinode-976295-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 ssh -n multinode-976295-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 ssh -n multinode-976295-m02 "sudo cat /home/docker/cp-test_multinode-976295-m03_multinode-976295-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.11s)

                                                
                                    
TestMultiNode/serial/StopNode (2.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-976295 node stop m03: (1.461617355s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-976295 status: exit status 7 (333.392947ms)

                                                
                                                
-- stdout --
	multinode-976295
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-976295-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-976295-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-976295 status --alsologtostderr: exit status 7 (330.292963ms)

                                                
                                                
-- stdout --
	multinode-976295
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-976295-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-976295-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 02:51:49.456961  816678 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:51:49.457216  816678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:51:49.457224  816678 out.go:374] Setting ErrFile to fd 2...
	I1209 02:51:49.457228  816678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:51:49.457419  816678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
	I1209 02:51:49.457587  816678 out.go:368] Setting JSON to false
	I1209 02:51:49.457612  816678 mustload.go:66] Loading cluster: multinode-976295
	I1209 02:51:49.457736  816678 notify.go:221] Checking for updates...
	I1209 02:51:49.457952  816678 config.go:182] Loaded profile config "multinode-976295": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 02:51:49.457966  816678 status.go:174] checking status of multinode-976295 ...
	I1209 02:51:49.459951  816678 status.go:371] multinode-976295 host status = "Running" (err=<nil>)
	I1209 02:51:49.459967  816678 host.go:66] Checking if "multinode-976295" exists ...
	I1209 02:51:49.462820  816678 main.go:143] libmachine: domain multinode-976295 has defined MAC address 52:54:00:ea:5d:35 in network mk-multinode-976295
	I1209 02:51:49.463303  816678 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ea:5d:35", ip: ""} in network mk-multinode-976295: {Iface:virbr1 ExpiryTime:2025-12-09 03:49:26 +0000 UTC Type:0 Mac:52:54:00:ea:5d:35 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-976295 Clientid:01:52:54:00:ea:5d:35}
	I1209 02:51:49.463345  816678 main.go:143] libmachine: domain multinode-976295 has defined IP address 192.168.39.102 and MAC address 52:54:00:ea:5d:35 in network mk-multinode-976295
	I1209 02:51:49.463506  816678 host.go:66] Checking if "multinode-976295" exists ...
	I1209 02:51:49.463776  816678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:51:49.465741  816678 main.go:143] libmachine: domain multinode-976295 has defined MAC address 52:54:00:ea:5d:35 in network mk-multinode-976295
	I1209 02:51:49.466178  816678 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ea:5d:35", ip: ""} in network mk-multinode-976295: {Iface:virbr1 ExpiryTime:2025-12-09 03:49:26 +0000 UTC Type:0 Mac:52:54:00:ea:5d:35 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-976295 Clientid:01:52:54:00:ea:5d:35}
	I1209 02:51:49.466204  816678 main.go:143] libmachine: domain multinode-976295 has defined IP address 192.168.39.102 and MAC address 52:54:00:ea:5d:35 in network mk-multinode-976295
	I1209 02:51:49.466334  816678 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/multinode-976295/id_rsa Username:docker}
	I1209 02:51:49.552117  816678 ssh_runner.go:195] Run: systemctl --version
	I1209 02:51:49.559642  816678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:51:49.575950  816678 kubeconfig.go:125] found "multinode-976295" server: "https://192.168.39.102:8443"
	I1209 02:51:49.575990  816678 api_server.go:166] Checking apiserver status ...
	I1209 02:51:49.576025  816678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:51:49.595346  816678 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1401/cgroup
	W1209 02:51:49.606661  816678 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1401/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1209 02:51:49.606733  816678 ssh_runner.go:195] Run: ls
	I1209 02:51:49.612381  816678 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I1209 02:51:49.616983  816678 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I1209 02:51:49.617006  816678 status.go:463] multinode-976295 apiserver status = Running (err=<nil>)
	I1209 02:51:49.617019  816678 status.go:176] multinode-976295 status: &{Name:multinode-976295 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:51:49.617043  816678 status.go:174] checking status of multinode-976295-m02 ...
	I1209 02:51:49.618667  816678 status.go:371] multinode-976295-m02 host status = "Running" (err=<nil>)
	I1209 02:51:49.618685  816678 host.go:66] Checking if "multinode-976295-m02" exists ...
	I1209 02:51:49.621172  816678 main.go:143] libmachine: domain multinode-976295-m02 has defined MAC address 52:54:00:bf:27:6d in network mk-multinode-976295
	I1209 02:51:49.621536  816678 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bf:27:6d", ip: ""} in network mk-multinode-976295: {Iface:virbr1 ExpiryTime:2025-12-09 03:50:25 +0000 UTC Type:0 Mac:52:54:00:bf:27:6d Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:multinode-976295-m02 Clientid:01:52:54:00:bf:27:6d}
	I1209 02:51:49.621558  816678 main.go:143] libmachine: domain multinode-976295-m02 has defined IP address 192.168.39.113 and MAC address 52:54:00:bf:27:6d in network mk-multinode-976295
	I1209 02:51:49.621734  816678 host.go:66] Checking if "multinode-976295-m02" exists ...
	I1209 02:51:49.621935  816678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:51:49.623737  816678 main.go:143] libmachine: domain multinode-976295-m02 has defined MAC address 52:54:00:bf:27:6d in network mk-multinode-976295
	I1209 02:51:49.624055  816678 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bf:27:6d", ip: ""} in network mk-multinode-976295: {Iface:virbr1 ExpiryTime:2025-12-09 03:50:25 +0000 UTC Type:0 Mac:52:54:00:bf:27:6d Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:multinode-976295-m02 Clientid:01:52:54:00:bf:27:6d}
	I1209 02:51:49.624074  816678 main.go:143] libmachine: domain multinode-976295-m02 has defined IP address 192.168.39.113 and MAC address 52:54:00:bf:27:6d in network mk-multinode-976295
	I1209 02:51:49.624223  816678 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-785489/.minikube/machines/multinode-976295-m02/id_rsa Username:docker}
	I1209 02:51:49.706495  816678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:51:49.723083  816678 status.go:176] multinode-976295-m02 status: &{Name:multinode-976295-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:51:49.723123  816678 status.go:174] checking status of multinode-976295-m03 ...
	I1209 02:51:49.724692  816678 status.go:371] multinode-976295-m03 host status = "Stopped" (err=<nil>)
	I1209 02:51:49.724709  816678 status.go:384] host is not running, skipping remaining checks
	I1209 02:51:49.724714  816678 status.go:176] multinode-976295-m03 status: &{Name:multinode-976295-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)
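
As the captured status output shows, stopping a single node makes `minikube status` exit non-zero (status 7) while still printing per-node state. A minimal sketch of the sequence this subtest exercises, using the profile name from this run:

	$ minikube -p multinode-976295 node stop m03
	$ minikube -p multinode-976295 status    # exit status 7 while multinode-976295-m03 is Stopped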

                                                
                                    
TestMultiNode/serial/StartAfterStop (35.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-976295 node start m03 -v=5 --alsologtostderr: (34.761021512s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (35.27s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (290.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-976295
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-976295
E1209 02:52:55.206924  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:53:21.729788  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:54:40.481887  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-976295: (2m49.459657363s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-976295 --wait=true -v=5 --alsologtostderr
E1209 02:55:18.662506  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-976295 --wait=true -v=5 --alsologtostderr: (2m0.49582609s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-976295
--- PASS: TestMultiNode/serial/RestartKeepsNodes (290.09s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-976295 node delete m03: (1.579316721s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.03s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (171.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 stop
E1209 02:57:38.276445  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:57:55.207658  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:59:40.481652  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-976295 stop: (2m50.960839039s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-976295 status: exit status 7 (67.080668ms)

                                                
                                                
-- stdout --
	multinode-976295
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-976295-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-976295 status --alsologtostderr: exit status 7 (65.594314ms)

                                                
                                                
-- stdout --
	multinode-976295
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-976295-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:00:08.196908  818948 out.go:360] Setting OutFile to fd 1 ...
	I1209 03:00:08.197232  818948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:00:08.197243  818948 out.go:374] Setting ErrFile to fd 2...
	I1209 03:00:08.197247  818948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:00:08.197434  818948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
	I1209 03:00:08.197600  818948 out.go:368] Setting JSON to false
	I1209 03:00:08.197625  818948 mustload.go:66] Loading cluster: multinode-976295
	I1209 03:00:08.197780  818948 notify.go:221] Checking for updates...
	I1209 03:00:08.197996  818948 config.go:182] Loaded profile config "multinode-976295": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 03:00:08.198011  818948 status.go:174] checking status of multinode-976295 ...
	I1209 03:00:08.200441  818948 status.go:371] multinode-976295 host status = "Stopped" (err=<nil>)
	I1209 03:00:08.200461  818948 status.go:384] host is not running, skipping remaining checks
	I1209 03:00:08.200467  818948 status.go:176] multinode-976295 status: &{Name:multinode-976295 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 03:00:08.200484  818948 status.go:174] checking status of multinode-976295-m02 ...
	I1209 03:00:08.201825  818948 status.go:371] multinode-976295-m02 host status = "Stopped" (err=<nil>)
	I1209 03:00:08.201839  818948 status.go:384] host is not running, skipping remaining checks
	I1209 03:00:08.201844  818948 status.go:176] multinode-976295-m02 status: &{Name:multinode-976295-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (171.09s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (81.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-976295 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1209 03:00:18.662918  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-976295 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m20.642668315s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976295 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (81.16s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (42.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-976295
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-976295-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-976295-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (83.351791ms)

                                                
                                                
-- stdout --
	* [multinode-976295-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-976295-m02' is duplicated with machine name 'multinode-976295-m02' in profile 'multinode-976295'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-976295-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-976295-m03 --driver=kvm2  --container-runtime=containerd: (41.632097027s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-976295
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-976295: exit status 80 (225.234578ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-976295 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-976295-m03 already exists in multinode-976295-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-976295-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.81s)
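
The name-conflict subtest confirms that profile names must be unique across existing profiles and their machine names: reusing a node name of the multi-node profile fails with MK_USAGE (exit status 14), and `node add` refuses to proceed once a standalone profile already owns the next node name (exit status 80). A minimal illustration using the names from this run:

	$ minikube start -p multinode-976295-m02 --driver=kvm2 --container-runtime=containerd    # rejected, exit status 14: name collides with a node of multinode-976295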

                                                
                                    
TestPreload (141.69s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-094847 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd
E1209 03:02:43.553718  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:02:55.203523  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-094847 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd: (1m32.45021025s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-094847 image pull gcr.io/k8s-minikube/busybox
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-094847
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-094847: (7.604396254s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-094847 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-094847 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (39.65553799s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-094847 image list
helpers_test.go:175: Cleaning up "test-preload-094847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-094847
--- PASS: TestPreload (141.69s)

                                                
                                    
TestScheduledStopUnix (113.45s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-882516 --memory=3072 --driver=kvm2  --container-runtime=containerd
E1209 03:04:40.481392  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-882516 --memory=3072 --driver=kvm2  --container-runtime=containerd: (41.768035343s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-882516 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1209 03:05:17.229440  821116 out.go:360] Setting OutFile to fd 1 ...
	I1209 03:05:17.229597  821116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:05:17.229609  821116 out.go:374] Setting ErrFile to fd 2...
	I1209 03:05:17.229617  821116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:05:17.229878  821116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
	I1209 03:05:17.230219  821116 out.go:368] Setting JSON to false
	I1209 03:05:17.230335  821116 mustload.go:66] Loading cluster: scheduled-stop-882516
	I1209 03:05:17.231041  821116 config.go:182] Loaded profile config "scheduled-stop-882516": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 03:05:17.231194  821116 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/config.json ...
	I1209 03:05:17.231469  821116 mustload.go:66] Loading cluster: scheduled-stop-882516
	I1209 03:05:17.231639  821116 config.go:182] Loaded profile config "scheduled-stop-882516": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-882516 -n scheduled-stop-882516
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-882516 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1209 03:05:17.537203  821161 out.go:360] Setting OutFile to fd 1 ...
	I1209 03:05:17.537447  821161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:05:17.537455  821161 out.go:374] Setting ErrFile to fd 2...
	I1209 03:05:17.537459  821161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:05:17.537644  821161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
	I1209 03:05:17.537911  821161 out.go:368] Setting JSON to false
	I1209 03:05:17.538139  821161 daemonize_unix.go:73] killing process 821151 as it is an old scheduled stop
	I1209 03:05:17.538246  821161 mustload.go:66] Loading cluster: scheduled-stop-882516
	I1209 03:05:17.538585  821161 config.go:182] Loaded profile config "scheduled-stop-882516": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 03:05:17.538809  821161 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/config.json ...
	I1209 03:05:17.539047  821161 mustload.go:66] Loading cluster: scheduled-stop-882516
	I1209 03:05:17.539211  821161 config.go:182] Loaded profile config "scheduled-stop-882516": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1209 03:05:17.544887  789441 retry.go:31] will retry after 120.713µs: open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/pid: no such file or directory
I1209 03:05:17.546083  789441 retry.go:31] will retry after 149.55µs: open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/pid: no such file or directory
I1209 03:05:17.547266  789441 retry.go:31] will retry after 158.594µs: open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/pid: no such file or directory
I1209 03:05:17.548431  789441 retry.go:31] will retry after 463.531µs: open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/pid: no such file or directory
I1209 03:05:17.549573  789441 retry.go:31] will retry after 339.254µs: open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/pid: no such file or directory
I1209 03:05:17.550737  789441 retry.go:31] will retry after 966.147µs: open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/pid: no such file or directory
I1209 03:05:17.551878  789441 retry.go:31] will retry after 1.123702ms: open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/pid: no such file or directory
I1209 03:05:17.554114  789441 retry.go:31] will retry after 1.087283ms: open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/pid: no such file or directory
I1209 03:05:17.556327  789441 retry.go:31] will retry after 3.316723ms: open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/pid: no such file or directory
I1209 03:05:17.560503  789441 retry.go:31] will retry after 3.850456ms: open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/pid: no such file or directory
I1209 03:05:17.564710  789441 retry.go:31] will retry after 8.031702ms: open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/pid: no such file or directory
I1209 03:05:17.572906  789441 retry.go:31] will retry after 12.411956ms: open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/pid: no such file or directory
I1209 03:05:17.586229  789441 retry.go:31] will retry after 9.264887ms: open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/pid: no such file or directory
I1209 03:05:17.596521  789441 retry.go:31] will retry after 24.893282ms: open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/pid: no such file or directory
I1209 03:05:17.621829  789441 retry.go:31] will retry after 39.772388ms: open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/pid: no such file or directory
I1209 03:05:17.662117  789441 retry.go:31] will retry after 42.76989ms: open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-882516 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1209 03:05:18.660709  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-882516 -n scheduled-stop-882516
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-882516
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-882516 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1209 03:05:43.291219  821323 out.go:360] Setting OutFile to fd 1 ...
	I1209 03:05:43.291343  821323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:05:43.291352  821323 out.go:374] Setting ErrFile to fd 2...
	I1209 03:05:43.291357  821323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:05:43.291548  821323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
	I1209 03:05:43.291880  821323 out.go:368] Setting JSON to false
	I1209 03:05:43.291968  821323 mustload.go:66] Loading cluster: scheduled-stop-882516
	I1209 03:05:43.292268  821323 config.go:182] Loaded profile config "scheduled-stop-882516": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 03:05:43.292339  821323 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/scheduled-stop-882516/config.json ...
	I1209 03:05:43.292532  821323 mustload.go:66] Loading cluster: scheduled-stop-882516
	I1209 03:05:43.292687  821323 config.go:182] Loaded profile config "scheduled-stop-882516": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-882516
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-882516: exit status 7 (64.539065ms)

                                                
                                                
-- stdout --
	scheduled-stop-882516
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-882516 -n scheduled-stop-882516
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-882516 -n scheduled-stop-882516: exit status 7 (62.562816ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-882516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-882516
--- PASS: TestScheduledStopUnix (113.45s)
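
The scheduled-stop test schedules a delayed stop, replaces it with a shorter one, cancels it, and finally lets a 15s schedule fire; once it does, `status` exits 7 with everything reported Stopped. A minimal sketch of the commands involved (profile name taken from this run, verbosity flags dropped):

	$ minikube stop -p scheduled-stop-882516 --schedule 5m
	$ minikube stop -p scheduled-stop-882516 --cancel-scheduled
	$ minikube stop -p scheduled-stop-882516 --schedule 15s
	$ minikube status -p scheduled-stop-882516    # exit status 7 after the scheduled stop has run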

                                                
                                    
TestRunningBinaryUpgrade (130.65s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2543235587 start -p running-upgrade-545058 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2543235587 start -p running-upgrade-545058 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd: (1m38.854615818s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-545058 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-545058 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (30.169641934s)
helpers_test.go:175: Cleaning up "running-upgrade-545058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-545058
--- PASS: TestRunningBinaryUpgrade (130.65s)

                                                
                                    
TestKubernetesUpgrade (148.51s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-643345 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-643345 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (44.488422874s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-643345
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-643345: (1.615320073s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-643345 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-643345 status --format={{.Host}}: exit status 7 (72.847584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-643345 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-643345 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m16.314651643s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-643345 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-643345 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-643345 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (90.73638ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-643345] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-643345
	    minikube start -p kubernetes-upgrade-643345 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6433452 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-643345 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-643345 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-643345 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (24.688765171s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-643345" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-643345
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-643345: (1.157794113s)
--- PASS: TestKubernetesUpgrade (148.51s)
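
The upgrade test follows a start-old, stop, start-new sequence and then verifies that an in-place downgrade is refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106). A minimal sketch using the versions exercised in this run (profile name kept, verbosity flags dropped):

	$ minikube start -p kubernetes-upgrade-643345 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=containerd
	$ minikube stop -p kubernetes-upgrade-643345
	$ minikube start -p kubernetes-upgrade-643345 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=kvm2 --container-runtime=containerd
	$ minikube start -p kubernetes-upgrade-643345 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=containerd    # refused, exit status 106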

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-524889 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-524889 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd: exit status 14 (95.010815ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-524889] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
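
As the stderr above states, --no-kubernetes cannot be combined with --kubernetes-version; when a version is pinned in the global config, the suggested fix is to unset it before starting without Kubernetes. A minimal sketch with the profile name from this run:

	$ minikube config unset kubernetes-version
	$ minikube start -p NoKubernetes-524889 --no-kubernetes --memory=3072 --driver=kvm2 --container-runtime=containerd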

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (83.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-524889 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-524889 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m23.004063094s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-524889 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (83.29s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (41.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-524889 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E1209 03:07:55.204364  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-524889 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (39.970594372s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-524889 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-524889 status -o json: exit status 2 (221.24655ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-524889","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-524889
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (41.07s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.64s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (111.63s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.253381972 start -p stopped-upgrade-576453 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.253381972 start -p stopped-upgrade-576453 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd: (45.063808799s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.253381972 -p stopped-upgrade-576453 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.253381972 -p stopped-upgrade-576453 stop: (1.513633689s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-576453 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-576453 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m5.050577539s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (111.63s)
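
The stopped-binary upgrade test provisions a cluster with an old release, stops it with that same binary, and then restarts it with the binary under test. A minimal sketch of that flow (the /tmp path is the per-run copy of the v1.35.0 release binary used by this job; note the old release still uses --vm-driver, and verbosity flags are dropped):

	$ /tmp/minikube-v1.35.0.253381972 start -p stopped-upgrade-576453 --memory=3072 --vm-driver=kvm2 --container-runtime=containerd
	$ /tmp/minikube-v1.35.0.253381972 -p stopped-upgrade-576453 stop
	$ minikube start -p stopped-upgrade-576453 --memory=3072 --driver=kvm2 --container-runtime=containerd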

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-524889 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-524889 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (28.547646951s)
--- PASS: TestNoKubernetes/serial/Start (28.55s)

                                                
                                    
x
+
TestPause/serial/Start (74.05s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-833318 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-833318 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m14.045027368s)
--- PASS: TestPause/serial/Start (74.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22081-785489/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
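The check above inspects the cache directory for the v0.0.0 placeholder version that a --no-kubernetes profile reports (see the profile config lines later in this log). A rough manual equivalent, assuming the same MINIKUBE_HOME as this run; the exact assertion the test makes is not shown here:

    ls /home/jenkins/minikube-integration/22081-785489/.minikube/cache/linux/amd64/v0.0.0 2>/dev/null \
      || echo "no Kubernetes binaries cached for v0.0.0"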

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-524889 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-524889 "sudo systemctl is-active --quiet service kubelet": exit status 1 (200.319291ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
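The exit status 1 is the expected outcome here: systemctl is-active returns non-zero when the kubelet unit is not running. To reproduce the check by hand against the same profile:

    out/minikube-linux-amd64 ssh -p NoKubernetes-524889 "sudo systemctl is-active --quiet service kubelet" \
      && echo "kubelet is active (unexpected)" \
      || echo "kubelet not running, as expected"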

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (4.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (4.372557703s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.61s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-524889
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-524889: (1.340788171s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (68.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-524889 --driver=kvm2  --container-runtime=containerd
E1209 03:09:40.480373  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-524889 --driver=kvm2  --container-runtime=containerd: (1m8.335973861s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (68.34s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (68.39s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-833318 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-833318 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m8.364620803s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (68.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-893667 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-893667 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (160.136733ms)

                                                
                                                
-- stdout --
	* [false-893667] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:10:06.877583  825269 out.go:360] Setting OutFile to fd 1 ...
	I1209 03:10:06.877730  825269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:10:06.877746  825269 out.go:374] Setting ErrFile to fd 2...
	I1209 03:10:06.877753  825269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:10:06.878100  825269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-785489/.minikube/bin
	I1209 03:10:06.878811  825269 out.go:368] Setting JSON to false
	I1209 03:10:06.880086  825269 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":31957,"bootTime":1765217850,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 03:10:06.880173  825269 start.go:143] virtualization: kvm guest
	I1209 03:10:06.882059  825269 out.go:179] * [false-893667] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 03:10:06.883400  825269 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 03:10:06.883419  825269 notify.go:221] Checking for updates...
	I1209 03:10:06.885700  825269 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:10:06.888475  825269 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-785489/kubeconfig
	I1209 03:10:06.889729  825269 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-785489/.minikube
	I1209 03:10:06.891021  825269 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 03:10:06.892386  825269 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:10:06.894389  825269 config.go:182] Loaded profile config "NoKubernetes-524889": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1209 03:10:06.894649  825269 config.go:182] Loaded profile config "pause-833318": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 03:10:06.894808  825269 config.go:182] Loaded profile config "stopped-upgrade-576453": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I1209 03:10:06.894975  825269 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 03:10:06.945278  825269 out.go:179] * Using the kvm2 driver based on user configuration
	I1209 03:10:06.946492  825269 start.go:309] selected driver: kvm2
	I1209 03:10:06.946513  825269 start.go:927] validating driver "kvm2" against <nil>
	I1209 03:10:06.946529  825269 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:10:06.948605  825269 out.go:203] 
	W1209 03:10:06.949906  825269 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1209 03:10:06.951307  825269 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-893667 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-893667

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-893667

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-893667

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-893667

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-893667

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-893667

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-893667

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-893667

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-893667

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-893667

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-893667

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-893667" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-893667" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22081-785489/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 03:09:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.92:8443
  name: pause-833318
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22081-785489/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 03:10:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.249:8443
  name: stopped-upgrade-576453
contexts:
- context:
    cluster: pause-833318
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 03:09:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-833318
  name: pause-833318
- context:
    cluster: stopped-upgrade-576453
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 03:10:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: stopped-upgrade-576453
  name: stopped-upgrade-576453
current-context: stopped-upgrade-576453
kind: Config
users:
- name: pause-833318
  user:
    client-certificate: /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/pause-833318/client.crt
    client-key: /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/pause-833318/client.key
- name: stopped-upgrade-576453
  user:
    client-certificate: /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/stopped-upgrade-576453/client.crt
    client-key: /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/stopped-upgrade-576453/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-893667

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893667"

                                                
                                                
----------------------- debugLogs end: false-893667 [took: 3.90552531s] --------------------------------
helpers_test.go:175: Cleaning up "false-893667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-893667
--- PASS: TestNetworkPlugins/group/false (4.23s)
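The MK_USAGE exit above is the expected result of this subtest: the containerd runtime requires a CNI, so --cni=false is rejected before any VM is created, and the debugLogs that follow simply confirm no false-893667 profile or context was left behind. For comparison, a start invocation that passes this validation might look like the following (the bridge CNI choice is only illustrative):

    out/minikube-linux-amd64 start -p false-893667 --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=containerd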

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-576453
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-576453: (1.806849634s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.81s)

                                                
                                    
x
+
TestISOImage/Setup (25.09s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-954861 --no-kubernetes --driver=kvm2  --container-runtime=containerd
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-954861 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (25.092836061s)
--- PASS: TestISOImage/Setup (25.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-524889 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-524889 "sudo systemctl is-active --quiet service kubelet": exit status 1 (182.591845ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/crictl (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "which crictl"
I1209 03:19:47.118116  789441 config.go:182] Loaded profile config "enable-default-cni-893667": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestISOImage/Binaries/crictl (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/curl (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/docker (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/git (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/iptables (0.23s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.23s)

                                                
                                    
x
+
TestISOImage/Binaries/podman (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/rsync (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/socat (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/wget (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxControl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxService (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.20s)
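Each of the Binaries subtests above is a single `which` probe inside the guest ISO. The same coverage can be checked by hand in one loop against the profile from this run:

    for b in crictl curl docker git iptables podman rsync socat wget VBoxControl VBoxService; do
      out/minikube-linux-amd64 -p guest-954861 ssh "which $b" || echo "missing: $b"
    done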

                                                
                                    
x
+
TestPause/serial/Pause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-833318 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.68s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.22s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-833318 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-833318 --output=json --layout=cluster: exit status 2 (223.999642ms)

                                                
                                                
-- stdout --
	{"Name":"pause-833318","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-833318","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.22s)
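As with the NoKubernetes status check earlier, the non-zero exit is expected: the cluster is paused, so the status command exits 2 while still printing the cluster-layout JSON. A quick manual probe, assuming jq is available:

    out/minikube-linux-amd64 status -p pause-833318 --output=json --layout=cluster | jq -r '.StatusName'
    # expected output: Paused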

                                                
                                    
x
+
TestPause/serial/Unpause (0.69s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-833318 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.01s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-833318 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-833318 --alsologtostderr -v=5: (1.004939002s)
--- PASS: TestPause/serial/PauseAgain (1.01s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.12s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-833318 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-833318 --alsologtostderr -v=5: (1.123281237s)
--- PASS: TestPause/serial/DeletePaused (1.12s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.43s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.43s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (88.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-578160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-578160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m28.98973358s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (88.99s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (107.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-206394 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-206394 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (1m47.295663393s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (107.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (117.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-492643 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-492643 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m57.947193329s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (117.95s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-578160 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c74572a2-3704-4e34-89bf-1c798be30960] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c74572a2-3704-4e34-89bf-1c798be30960] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005042787s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-578160 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)
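The DeployApp step is: create the busybox manifest, wait for the integration-test=busybox pod to become Ready, then exec `ulimit -n` inside it. A kubectl-only sketch of the same flow (the contents of testdata/busybox.yaml are not shown in this report):

    kubectl --context old-k8s-version-578160 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-578160 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
    kubectl --context old-k8s-version-578160 exec busybox -- /bin/sh -c "ulimit -n"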

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-578160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-578160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.15531657s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-578160 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.24s)
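The describe call above lets the test inspect the metrics-server deployment after the image/registry overrides. A more targeted spot check might be the following; the exact rendered image string is not shown in this report, but it should reflect the fake.domain registry override:

    kubectl --context old-k8s-version-578160 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'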

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (83.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-578160 --alsologtostderr -v=3
E1209 03:12:55.203975  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-578160 --alsologtostderr -v=3: (1m23.651477222s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (83.65s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-206394 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9858d3fe-046e-4117-a69e-fd1633578a20] Pending
helpers_test.go:352: "busybox" [9858d3fe-046e-4117-a69e-fd1633578a20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9858d3fe-046e-4117-a69e-fd1633578a20] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.006285688s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-206394 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-206394 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-206394 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (71.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-206394 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-206394 --alsologtostderr -v=3: (1m11.137541243s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (71.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (7.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-492643 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4a6c9e83-8d93-404e-9c42-5b9c6b5a6724] Pending
helpers_test.go:352: "busybox" [4a6c9e83-8d93-404e-9c42-5b9c6b5a6724] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4a6c9e83-8d93-404e-9c42-5b9c6b5a6724] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.00507354s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-492643 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.31s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-492643 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-492643 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (85.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-492643 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-492643 --alsologtostderr -v=3: (1m25.346074096s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (85.35s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-578160 -n old-k8s-version-578160
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-578160 -n old-k8s-version-578160: exit status 7 (65.363018ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-578160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)
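Note: the non-zero exit from minikube status above is expected, since the profile was stopped in a previous step; the test only asserts that the Host field reads Stopped before enabling the addon. A minimal manual equivalent, assuming the same profile name as in the log:

  # status exits non-zero while the host is down; the run above saw exit 7 with "Stopped"
  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-578160 -n old-k8s-version-578160 || echo "status exited $? (host stopped)"
  # addons can be enabled while the cluster is down; the change takes effect on the next start
  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-578160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4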

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (43.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-578160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0
E1209 03:14:18.277985  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-578160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0: (43.015223348s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-578160 -n old-k8s-version-578160
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (43.30s)
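Note: the E1209 "Loading client cert failed" lines interleaved here and throughout the rest of the log come from the test binary's shared transport cache, which still references client certificates of profiles that earlier parts of the run have already torn down (addons-520986, functional-230202, and so on). The surrounding tests pass, so these appear to be log noise rather than failures. A quick way to confirm the referenced file is simply gone (path copied verbatim from the message above):

  # the loader complains because this client.crt no longer exists on disk
  ls -l /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/addons-520986/client.crt
  # expected: "No such file or directory"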

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-206394 -n no-preload-206394
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-206394 -n no-preload-206394: exit status 7 (68.334211ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-206394 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (43.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-206394 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1209 03:14:40.481238  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-206394 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (43.6481323s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-206394 -n no-preload-206394
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (43.90s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xk7nr" [cd9ddc08-db50-40e4-bbfa-2061d307c0ec] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xk7nr" [cd9ddc08-db50-40e4-bbfa-2061d307c0ec] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004801794s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xk7nr" [cd9ddc08-db50-40e4-bbfa-2061d307c0ec] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004781271s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-578160 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-578160 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-578160 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-578160 -n old-k8s-version-578160
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-578160 -n old-k8s-version-578160: exit status 2 (241.537709ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-578160 -n old-k8s-version-578160
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-578160 -n old-k8s-version-578160: exit status 2 (229.635985ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-578160 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-578160 -n old-k8s-version-578160
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-578160 -n old-k8s-version-578160
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.83s)
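Note: the Pause sub-test drives the profile through pause and unpause and checks component status at each step; exit status 2 with APIServer reporting Paused and Kubelet reporting Stopped is the expected shape while the profile is paused. A condensed version of the same sequence, assuming the profile is still present:

  out/minikube-linux-amd64 pause -p old-k8s-version-578160 --alsologtostderr -v=1
  # while paused, these report Paused / Stopped and exit non-zero (2 in the run above)
  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-578160
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-578160
  out/minikube-linux-amd64 unpause -p old-k8s-version-578160 --alsologtostderr -v=1
  # after unpause the same status calls should return to exit 0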

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-947857 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-947857 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m25.795368501s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.80s)
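Note: this profile starts the API server on port 8444 instead of the default 8443 (--apiserver-port=8444), so the kubeconfig entry minikube writes for it should point at that port. A quick check, assuming the cluster entry is named after the profile as usual:

  # print the API server URL recorded for this profile; it should end in :8444
  kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-947857")].cluster.server}'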

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-6hjtj" [7333487a-cbc8-4e20-abe2-a6b3eb2c8b45] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004526752s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-492643 -n embed-certs-492643
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-492643 -n embed-certs-492643: exit status 7 (71.068653ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-492643 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (62.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-492643 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-492643 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m2.185424613s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-492643 -n embed-certs-492643
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (62.45s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-6hjtj" [7333487a-cbc8-4e20-abe2-a6b3eb2c8b45] Running
E1209 03:15:18.660954  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-804291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004083731s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-206394 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-206394 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-206394 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-206394 -n no-preload-206394
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-206394 -n no-preload-206394: exit status 2 (220.282804ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-206394 -n no-preload-206394
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-206394 -n no-preload-206394: exit status 2 (230.725612ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-206394 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-206394 -n no-preload-206394
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-206394 -n no-preload-206394
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.76s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (65.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-235453 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-235453 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (1m5.086017331s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (65.09s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (121.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-893667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-893667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (2m1.598352947s)
--- PASS: TestNetworkPlugins/group/auto/Start (121.60s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bck8x" [d14f7adb-3758-4b51-ab46-d521b77ae7ad] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005013544s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bck8x" [d14f7adb-3758-4b51-ab46-d521b77ae7ad] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00409255s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-492643 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-492643 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-492643 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-492643 -n embed-certs-492643
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-492643 -n embed-certs-492643: exit status 2 (257.872614ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-492643 -n embed-certs-492643
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-492643 -n embed-certs-492643: exit status 2 (265.192496ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-492643 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-492643 -n embed-certs-492643
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-492643 -n embed-certs-492643
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-235453 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-235453 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.394080795s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.39s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (61.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-893667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-893667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m1.704051431s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (61.70s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (3.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-235453 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-235453 --alsologtostderr -v=3: (3.145908492s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-235453 -n newest-cni-235453
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-235453 -n newest-cni-235453: exit status 7 (74.529567ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-235453 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (48.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-235453 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-235453 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (48.583232265s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-235453 -n newest-cni-235453
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (48.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-947857 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [48806e19-fed6-477c-97b4-77bac78b0e2d] Pending
helpers_test.go:352: "busybox" [48806e19-fed6-477c-97b4-77bac78b0e2d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [48806e19-fed6-477c-97b4-77bac78b0e2d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.006003205s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-947857 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-947857 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-947857 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.240958191s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-947857 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (80.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-947857 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-947857 --alsologtostderr -v=3: (1m20.84534024s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (80.85s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-235453 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-235453 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-235453 -n newest-cni-235453
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-235453 -n newest-cni-235453: exit status 2 (219.392313ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-235453 -n newest-cni-235453
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-235453 -n newest-cni-235453: exit status 2 (219.007985ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-235453 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-235453 -n newest-cni-235453
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-235453 -n newest-cni-235453
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.52s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (73.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-893667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-893667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m13.31716195s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-s4bz7" [971c616b-0167-41b1-a60f-19968c7712ca] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00619837s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
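Note: ControllerPod only verifies that the CNI's node agent is healthy before the connectivity tests run. A manual equivalent of the readiness check, assuming the app=kindnet label and context shown above:

  # the test waits for a Running pod labelled app=kindnet in kube-system
  kubectl --context kindnet-893667 get pods -n kube-system -l app=kindnet
  kubectl --context kindnet-893667 wait --for=condition=Ready pod -l app=kindnet -n kube-system --timeout=10m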

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-893667 "pgrep -a kubelet"
E1209 03:17:35.477033  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/old-k8s-version-578160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:17:35.483484  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/old-k8s-version-578160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:17:35.494949  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/old-k8s-version-578160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:17:35.516741  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/old-k8s-version-578160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1209 03:17:35.556944  789441 config.go:182] Loaded profile config "auto-893667": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-893667 replace --force -f testdata/netcat-deployment.yaml
E1209 03:17:35.558372  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/old-k8s-version-578160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:17:35.639838  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/old-k8s-version-578160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:17:35.801482  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/old-k8s-version-578160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1209 03:17:35.984596  789441 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1209 03:17:35.984865  789441 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
E1209 03:17:36.123194  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/old-k8s-version-578160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1209 03:17:36.149544  789441 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-49tk5" [088a9f3d-35d9-44cc-9003-05882050e040] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1209 03:17:36.764997  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/old-k8s-version-578160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:17:38.047199  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/old-k8s-version-578160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-49tk5" [088a9f3d-35d9-44cc-9003-05882050e040] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005406865s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.64s)
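Note: the kapi.go lines above track the netcat deployment's generation, observedGeneration, and replica counts until they agree. kubectl exposes the same convergence check directly; a rough one-line equivalent, assuming the same context and deployment name:

  # blocks until observedGeneration catches up and the desired replicas are available
  kubectl --context auto-893667 rollout status deployment/netcat --timeout=15m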

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-893667 "pgrep -a kubelet"
E1209 03:17:40.609515  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/old-k8s-version-578160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1209 03:17:40.612537  789441 config.go:182] Loaded profile config "kindnet-893667": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-893667 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ksd5m" [a57e6b8a-adbd-4184-9d78-30445f115564] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ksd5m" [a57e6b8a-adbd-4184-9d78-30445f115564] Running
E1209 03:17:45.731813  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/old-k8s-version-578160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006372885s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-893667 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-893667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-893667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-893667 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-893667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-893667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (74.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-893667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-893667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m14.103570624s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.10s)
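Note: --cni accepts either a built-in plugin name or, as in this run, a path to a CNI manifest (testdata/kube-flannel.yaml) that minikube applies once the control plane is up. A sketch of the built-in form, using a hypothetical profile name for illustration:

  # equivalent run using the bundled flannel support instead of a manifest on disk
  out/minikube-linux-amd64 start -p flannel-demo --memory=3072 --cni=flannel --driver=kvm2 --container-runtime=containerd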

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (100.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-893667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
E1209 03:18:07.653361  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/no-preload-206394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:18:07.659788  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/no-preload-206394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:18:07.671270  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/no-preload-206394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:18:07.692760  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/no-preload-206394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:18:07.734865  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/no-preload-206394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:18:07.816441  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/no-preload-206394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-893667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m40.132851547s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (100.13s)
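Note: --enable-default-cni=true is the older spelling for asking minikube to install its simple bridge CNI; on current minikube the same intent is generally expressed as --cni=bridge. A hypothetical equivalent invocation under that assumption, with an illustrative profile name:

  # same intent as the --enable-default-cni run above, expressed with the newer flag
  out/minikube-linux-amd64 start -p bridge-demo --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=containerd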

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-947857 -n default-k8s-diff-port-947857
E1209 03:18:07.978171  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/no-preload-206394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-947857 -n default-k8s-diff-port-947857: exit status 7 (85.422898ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-947857 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (81.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-947857 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.2
E1209 03:18:08.300163  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/no-preload-206394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:18:08.941869  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/no-preload-206394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:18:10.224184  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/no-preload-206394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:18:12.786453  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/no-preload-206394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:18:16.457898  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/old-k8s-version-578160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:18:17.908632  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/no-preload-206394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:18:28.150992  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/no-preload-206394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-947857 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m21.358627256s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-947857 -n default-k8s-diff-port-947857
E1209 03:19:29.594995  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/no-preload-206394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (81.66s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-zmmws" [172a6e81-1d8c-4743-bc02-3f4d62dc64b1] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-zmmws" [172a6e81-1d8c-4743-bc02-3f4d62dc64b1] Running
E1209 03:18:48.633028  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/no-preload-206394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007560133s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-893667 "pgrep -a kubelet"
I1209 03:18:49.362560  789441 config.go:182] Loaded profile config "calico-893667": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-893667 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7lwfm" [5bcee0f0-6824-449e-9993-5914910b9802] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7lwfm" [5bcee0f0-6824-449e-9993-5914910b9802] Running
E1209 03:18:57.419873  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/old-k8s-version-578160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004908897s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.36s)
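The NetCatPod step is a plain deploy-and-wait; a rough manual equivalent, assuming the same testdata/netcat-deployment.yaml manifest is available locally:

    kubectl --context calico-893667 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context calico-893667 rollout status deployment/netcat --timeout=15m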

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-893667 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-893667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-893667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)
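Localhost and HairPin differ only in the target: the first connects to 127.0.0.1 inside the pod, the second has the pod reach itself through its own service name (netcat), which exercises hairpin NAT on the CNI. The two probes, verbatim from the runs above:

    kubectl --context calico-893667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # loopback
    kubectl --context calico-893667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # via the service name, hairpin path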

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-893667 "pgrep -a kubelet"
I1209 03:19:16.338876  789441 config.go:182] Loaded profile config "custom-flannel-893667": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-893667 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pfjbc" [92cd7413-9353-4f64-a644-d6ec7af95892] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pfjbc" [92cd7413-9353-4f64-a644-d6ec7af95892] Running
E1209 03:19:23.556115  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004568904s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (75.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-893667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-893667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m15.495036959s)
--- PASS: TestNetworkPlugins/group/flannel/Start (75.50s)
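Once the flannel start completes, the CNI pods can be checked directly; a sketch using the same namespace and label the ControllerPod step later waits on:

    kubectl --context flannel-893667 -n kube-flannel get pods -l app=flannel
    kubectl --context flannel-893667 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=600s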

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-893667 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-893667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-893667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ts6t5" [01e2dd9a-6dbb-4b07-a018-ead42b710fde] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ts6t5" [01e2dd9a-6dbb-4b07-a018-ead42b710fde] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00616248s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ts6t5" [01e2dd9a-6dbb-4b07-a018-ead42b710fde] Running
E1209 03:19:40.480731  789441 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/functional-230202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00436214s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-947857 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-947857 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)
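VerifyKubernetesImages lists what the runtime has cached and reports anything outside the expected Kubernetes image set (here kindnetd and the busybox test image). The same data can be inspected by hand; jq is only used for pretty-printing:

    out/minikube-linux-amd64 -p default-k8s-diff-port-947857 image list --format=json | jq .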

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-947857 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-947857 -n default-k8s-diff-port-947857
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-947857 -n default-k8s-diff-port-947857: exit status 2 (297.171188ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-947857 -n default-k8s-diff-port-947857
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-947857 -n default-k8s-diff-port-947857: exit status 2 (286.376209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-947857 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-947857 -n default-k8s-diff-port-947857
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-947857 -n default-k8s-diff-port-947857
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.36s)
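The exit status 2 results above are expected: while the cluster is paused, minikube status reports the apiserver as Paused and the kubelet as Stopped and exits non-zero, which the test tolerates ("may be ok"). The same sequence by hand, sketched against this profile:

    out/minikube-linux-amd64 pause -p default-k8s-diff-port-947857
    out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-947857 || true   # prints Paused, exits 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-947857 || true     # prints Stopped, exits 2
    out/minikube-linux-amd64 unpause -p default-k8s-diff-port-947857
    out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-947857                # back to Running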

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (87.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-893667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-893667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m27.464644149s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-893667 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-893667 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bp4rq" [af0c648d-f951-4853-b099-83a50c58da71] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bp4rq" [af0c648d-f951-4853-b099-83a50c58da71] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005646306s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.21s)
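The remaining PersistentMounts subtests repeat this exact check for the other state directories: each path must be backed by the persistent ext4 disk in the guest, so df is restricted to ext4 filesystems and the grep fails if the path is actually on tmpfs or overlay. A compact way to run the whole set by hand, assuming the guest-954861 profile is still up:

    for d in /data /var/lib/docker /var/lib/cni /var/lib/kubelet /var/lib/minikube /var/lib/toolbox /var/lib/boot2docker; do
      out/minikube-linux-amd64 -p guest-954861 ssh "df -t ext4 $d | grep $d" || echo "$d is not on persistent ext4 storage"
    done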

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.20s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.21s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.20s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.21s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.20s)

                                                
                                    
x
+
TestISOImage/VersionJSON (0.19s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   commit: 0d7c1d9864cc7aa82e32494e32331ce8be405026
iso_test.go:118:   iso_version: v1.37.0-1765151505-21409
iso_test.go:118:   kicbase_version: v0.0.48-1764843390-22032
iso_test.go:118:   minikube_version: v1.37.0
--- PASS: TestISOImage/VersionJSON (0.19s)
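If jq is available, the same fields can be pulled straight out of the guest; a sketch assuming the JSON keys match the names printed above:

    out/minikube-linux-amd64 -p guest-954861 ssh "cat /version.json" | jq -r '.commit, .iso_version, .kicbase_version, .minikube_version'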

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.18s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-954861 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.18s)
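The eBPF check only asserts that the kernel exposes its BTF type information at /sys/kernel/btf/vmlinux, which CO-RE-style eBPF tooling relies on and which is present when the guest kernel is built with CONFIG_DEBUG_INFO_BTF=y. The probe, verbatim:

    out/minikube-linux-amd64 -p guest-954861 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"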

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-893667 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-893667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-893667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-nvfml" [1fea02cb-dd9a-405d-9fba-b5d3d04856f6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004288351s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-893667 "pgrep -a kubelet"
I1209 03:20:40.790857  789441 config.go:182] Loaded profile config "flannel-893667": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-893667 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nb58p" [7647056b-03db-4591-bead-e3a6a7802313] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nb58p" [7647056b-03db-4591-bead-e3a6a7802313] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003444376s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-893667 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-893667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-893667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-893667 "pgrep -a kubelet"
I1209 03:21:11.583334  789441 config.go:182] Loaded profile config "bridge-893667": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-893667 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lrcft" [e2698b0f-b587-49f4-ae7e-307cf6e3a536] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lrcft" [e2698b0f-b587-49f4-ae7e-307cf6e3a536] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004932066s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-893667 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-893667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-893667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    

Test skip (51/437)

Order skipped test Duration

5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
367 TestStartStop/group/disable-driver-mounts 0.17
379 TestNetworkPlugins/group/kubenet 4.16
388 TestNetworkPlugins/group/cilium 4.03
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:819: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:543: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1093: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
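The remaining TunnelCmd subtests skip for the same reason: minikube tunnel needs to create routes on the host, and the test first checks that 'route' can run without a password, which it cannot on this runner. Running the flow interactively instead looks roughly like the sketch below, assuming a functional profile is up and sudo is available:

    sudo -v                                              # cache credentials so the route changes don't prompt mid-run
    out/minikube-linux-amd64 tunnel -p functional-230202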

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-988306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-988306
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (4.16s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-893667 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-893667

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-893667

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-893667

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-893667

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-893667

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-893667

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-893667

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-893667

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-893667

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-893667

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-893667

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-893667" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-893667" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22081-785489/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 03:09:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.92:8443
  name: pause-833318
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22081-785489/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 03:09:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.249:8443
  name: stopped-upgrade-576453
contexts:
- context:
    cluster: pause-833318
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 03:09:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-833318
  name: pause-833318
- context:
    cluster: stopped-upgrade-576453
    user: stopped-upgrade-576453
  name: stopped-upgrade-576453
current-context: ""
kind: Config
users:
- name: pause-833318
  user:
    client-certificate: /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/pause-833318/client.crt
    client-key: /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/pause-833318/client.key
- name: stopped-upgrade-576453
  user:
    client-certificate: /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/stopped-upgrade-576453/client.crt
    client-key: /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/stopped-upgrade-576453/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-893667

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893667"

                                                
                                                
----------------------- debugLogs end: kubenet-893667 [took: 3.960007286s] --------------------------------
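Note: the kubectl config captured earlier in this debug log lists only the pause-833318 and stopped-upgrade-576453 entries, with an empty current-context, which is why every kubectl probe above reports that the kubenet-893667 context is missing. A minimal sketch of that same existence check, assuming client-go's clientcmd loader and a kubeconfig path supplied on the command line (the path itself is not shown in this report), could look like:

// checkcontext.go - hypothetical sketch, not part of minikube's test suite.
// It loads a kubeconfig and reports whether a named context exists, mirroring
// the "context was not found for specified context: kubenet-893667" errors above.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Usage: checkcontext <kubeconfig-path>; the report does not show the path,
	// so it is passed in rather than hard-coded.
	cfg, err := clientcmd.LoadFromFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	const name = "kubenet-893667" // context name taken from the debug log above
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("context was not found for specified context: %s\n", name)
	}
}

kubectl resolves --context against the same map, so the nslookup/dig probes above fail before any network traffic is sent.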
helpers_test.go:175: Cleaning up "kubenet-893667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-893667
--- SKIP: TestNetworkPlugins/group/kubenet (4.16s)

TestNetworkPlugins/group/cilium (4.03s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-893667 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-893667

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-893667

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-893667

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-893667

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-893667

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-893667

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-893667

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-893667

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-893667

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-893667

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-893667

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-893667" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-893667

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-893667

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-893667

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-893667

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-893667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-893667" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22081-785489/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 03:09:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.92:8443
  name: pause-833318
contexts:
- context:
    cluster: pause-833318
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 03:09:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-833318
  name: pause-833318
current-context: ""
kind: Config
users:
- name: pause-833318
  user:
    client-certificate: /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/pause-833318/client.crt
    client-key: /home/jenkins/minikube-integration/22081-785489/.minikube/profiles/pause-833318/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-893667

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-893667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893667"

                                                
                                                
----------------------- debugLogs end: cilium-893667 [took: 3.82374533s] --------------------------------
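Note: the netcat probes at the top of this debug log (and the kubenet one above) query the cluster DNS service at 10.96.0.10 over udp/53 and tcp/53 for kubernetes.default.svc.cluster.local; here they never run because the cilium-893667 profile was never started. A stand-alone sketch of an equivalent lookup, assuming it executes somewhere that can actually reach that service IP, might be:

// dnsprobe.go - illustrative only; the real probes use nslookup/dig/nc inside a
// netcat pod. The resolver IP and hostname are taken from the probe list above.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			// Send the query straight to the in-cluster DNS service, as "dig @10.96.0.10" does.
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	fmt.Println(addrs, err)
}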
helpers_test.go:175: Cleaning up "cilium-893667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-893667
--- SKIP: TestNetworkPlugins/group/cilium (4.03s)